I am even stupider than I thought

Branch: master
John ShaggyTwoDope Jenkins, 5 years ago
parent commit 93dc23aa45
55 changed files with 2159 additions and 5 deletions
1. drive-git/PKGBUILD (+2, -1)
2. drive-git/drive-git-r312.231b3d0-1.src.tar.gz (BIN)
3. drive/PKGBUILD (+2, -1)
4. drive/drive-0.1.9-4-x86_64.pkg.tar.xz (BIN)
5. drive/drive-0.1.9-4.src.tar.gz (BIN)
6. drive/pkg/drive/.MTREE (BIN)
7. drive/pkg/drive/.PKGINFO (+4, -3)
8. pyanisort-git/PKGBUILD (+25, -0)
9. pyanisort-git/pkg/pyanisort-git/.MTREE (BIN)
10. pyanisort-git/pkg/pyanisort-git/.PKGINFO (+29, -0)
11. pyanisort-git/pkg/pyanisort-git/usr/bin/pyanisort (+10, -0)
12. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/__pycache__/ez_setup.cpython-34.pyc (BIN)
13. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/__pycache__/ez_setup.cpython-34.pyo (BIN)
14. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/ez_setup.py (+364, -0)
15. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/PKG-INFO (+308, -0)
16. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/SOURCES.txt (+19, -0)
17. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/dependency_links.txt (+1, -0)
18. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/entry_points.txt (+3, -0)
19. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/top_level.txt (+2, -0)
20. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__init__.py (+9, -0)
21. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/__init__.cpython-34.pyc (BIN)
22. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/__init__.cpython-34.pyo (BIN)
23. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/findtitle.cpython-34.pyc (BIN)
24. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/findtitle.cpython-34.pyo (BIN)
25. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/pyanisort.cpython-34.pyc (BIN)
26. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/pyanisort.cpython-34.pyo (BIN)
27. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/seriesmatch.cpython-34.pyc (BIN)
28. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/seriesmatch.cpython-34.pyo (BIN)
29. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/utilities.cpython-34.pyc (BIN)
30. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/utilities.cpython-34.pyo (BIN)
31. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/conf/logger.conf (+31, -0)
32. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/findtitle.py (+194, -0)
33. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/logs/pyAniSort.log (+0, -0)
34. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/pyanisort.py (+144, -0)
35. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/seriesmatch.py (+243, -0)
36. pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/utilities.py (+233, -0)
37. pyanisort-git/pkg/pyanisort-git/usr/share/licenses/pyanisort-git/LICENSE (+20, -0)
38. pyanisort-git/pyanisort-git-r42.057e52b-1-any.pkg.tar.xz (BIN)
39. pyanisort-git/pyanisort-git-r42.057e52b-1.src.tar.gz (BIN)
40. pyanisort-git/pyanisort/HEAD (+1, -0)
41. pyanisort-git/pyanisort/config (+8, -0)
42. pyanisort-git/pyanisort/description (+1, -0)
43. pyanisort-git/pyanisort/hooks/applypatch-msg.sample (+15, -0)
44. pyanisort-git/pyanisort/hooks/commit-msg.sample (+24, -0)
45. pyanisort-git/pyanisort/hooks/post-update.sample (+8, -0)
46. pyanisort-git/pyanisort/hooks/pre-applypatch.sample (+14, -0)
47. pyanisort-git/pyanisort/hooks/pre-commit.sample (+49, -0)
48. pyanisort-git/pyanisort/hooks/pre-push.sample (+53, -0)
49. pyanisort-git/pyanisort/hooks/pre-rebase.sample (+169, -0)
50. pyanisort-git/pyanisort/hooks/prepare-commit-msg.sample (+36, -0)
51. pyanisort-git/pyanisort/hooks/update.sample (+128, -0)
52. pyanisort-git/pyanisort/info/exclude (+6, -0)
53. pyanisort-git/pyanisort/objects/pack/pack-8923f3821c92e199f8a22160cb3d2c989d20c312.idx (BIN)
54. pyanisort-git/pyanisort/objects/pack/pack-8923f3821c92e199f8a22160cb3d2c989d20c312.pack (BIN)
55. pyanisort-git/pyanisort/packed-refs (+4, -0)

drive-git/PKGBUILD (+2, -1)

@@ -8,7 +8,8 @@ pkgdesc="Drive is a tiny program to pull or push Google Drive files. You need go
 arch=('any')
 url="https://github.com/odeke-em/drive"
 license=('Apache')
-makedepends=('go' 'git' 'mercurial' 'gtk-update-icon-cache')
+depends=('hicolor-icon-theme' 'gtk-update-icon-cache')
+makedepends=('go' 'git' 'mercurial')
 conflicts=('drive')
 options=('!strip' '!emptydirs')
 install=$pkgname.install

drive-git/drive-git-r312.231b3d0-1.src.tar.gz (BIN)


drive/PKGBUILD (+2, -1)

@@ -7,7 +7,8 @@ pkgdesc="Pull or push Google Drive files"
 arch=('x86_64' 'i686' 'arm' 'armv6h' 'armv7h')
 url="http://github.com/odeke-em/drive"
 license=('Apache')
-makedepends=('go' 'git' 'mercurial' 'gtk-update-icon-cache')
+depends=('hicolor-icon-theme' 'gtk-update-icon-cache')
+makedepends=('go' 'git' 'mercurial')
 conflicts=('drive-git')
 options=('!strip' '!emptydirs')
 install=$pkgname.install

drive/drive-0.1.9-4-x86_64.pkg.tar.xz (BIN)


drive/drive-0.1.9-4.src.tar.gz (BIN)


drive/pkg/drive/.MTREE (BIN)


drive/pkg/drive/.PKGINFO (+4, -3)

@@ -1,20 +1,21 @@
 # Generated by makepkg 4.2.1
 # using fakeroot version 1.20.2
-# Mon Apr 27 01:04:10 UTC 2015
+# Mon Apr 27 04:08:44 UTC 2015
 pkgname = drive
 pkgver = 0.1.9-4
 pkgdesc = Pull or push Google Drive files
 url = http://github.com/odeke-em/drive
-builddate = 1430096650
+builddate = 1430107724
 packager = Unknown Packager
 size = 8801280
 arch = x86_64
 license = Apache
 conflict = drive-git
+depend = hicolor-icon-theme
+depend = gtk-update-icon-cache
 makedepend = go
 makedepend = git
 makedepend = mercurial
-makedepend = gtk-update-icon-cache
 makepkgopt = !strip
 makepkgopt = docs
 makepkgopt = !libtool

pyanisort-git/PKGBUILD (+25, -0)

@@ -0,0 +1,25 @@
# Maintainer: John Jenkins twodopeshaggy@gmail.com

pkgname=pyanisort-git
pkgver=r42.057e52b
pkgrel=1
pkgdesc="Automatically sorts anime using information from anidb.net"
arch=('any')
url="https://github.com/jotaro0010/pyanisort"
license=('MIT')
makedepends=('git')
depends=('python' 'python-setuptools')
source=('git+https://github.com/jotaro0010/pyanisort.git')
sha256sums=('SKIP')

pkgver() {
cd "$srcdir/pyanisort"
printf "r%s.%s" "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
}

package() {
cd "$srcdir/pyanisort"
python setup.py install --root="$pkgdir/" --optimize=1
mkdir -p $pkgdir/usr/share/licenses/$pkgname
install -m 0644 LICENSE $pkgdir/usr/share/licenses/$pkgname/
}
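
A minimal sketch of how this PKGBUILD would typically be built and installed on an Arch-based system (assumes base-devel and git are present and the directory name matches this repository layout):

$ cd pyanisort-git
$ makepkg -si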

pyanisort-git/pkg/pyanisort-git/.MTREE (BIN)


pyanisort-git/pkg/pyanisort-git/.PKGINFO (+29, -0)

@@ -0,0 +1,29 @@
# Generated by makepkg 4.2.1
# using fakeroot version 1.20.2
# Sat May 2 01:27:28 UTC 2015
pkgname = pyanisort-git
pkgver = r42.057e52b-1
pkgdesc = Automatically sorts anime using information from anidb.net
url = https://github.com/jotaro0010/pyanisort
builddate = 1430530048
packager = Unknown Packager
size = 191488
arch = any
license = MIT
conflict = rtv
depend = ncurses
depend = python
depend = python-six
depend = python-requests
depend = python-praw
depend = python-setuptools
makedepend = git
makepkgopt = strip
makepkgopt = docs
makepkgopt = !libtool
makepkgopt = !staticlibs
makepkgopt = emptydirs
makepkgopt = zipman
makepkgopt = purge
makepkgopt = !upx
makepkgopt = !debug

pyanisort-git/pkg/pyanisort-git/usr/bin/pyanisort (+10, -0)

@@ -0,0 +1,10 @@
#!/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'pyAniSort==1.0.3','console_scripts','pyanisort'
__requires__ = 'pyAniSort==1.0.3'
import sys
from pkg_resources import load_entry_point

if __name__ == '__main__':
sys.exit(
load_entry_point('pyAniSort==1.0.3', 'console_scripts', 'pyanisort')()
)
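# Note: load_entry_point resolves the 'pyanisort' console_scripts entry to
# pyanisort.pyanisort:main, as declared in entry_points.txt further below.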

pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/__pycache__/ez_setup.cpython-34.pyc (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/__pycache__/ez_setup.cpython-34.pyo (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/ez_setup.py (+364, -0)

@@ -0,0 +1,364 @@
#!/usr/bin/env python
"""Bootstrap setuptools installation

To use setuptools in your package's setup.py, include this
file in the same directory and add this to the top of your setup.py::

from ez_setup import use_setuptools
use_setuptools()

To require a specific version of setuptools, set a download
mirror, or use an alternate download directory, simply supply
the appropriate options to ``use_setuptools()``.

This file can also be run as a script to install or upgrade setuptools.
"""
import os
import shutil
import sys
import tempfile
import tarfile
import optparse
import subprocess
import platform
import textwrap

from distutils import log

try:
from site import USER_SITE
except ImportError:
USER_SITE = None

DEFAULT_VERSION = "2.2"
DEFAULT_URL = "https://pypi.python.org/packages/source/s/setuptools/"

def _python_cmd(*args):
"""
Return True if the command succeeded.
"""
args = (sys.executable,) + args
return subprocess.call(args) == 0

def _install(tarball, install_args=()):
# extracting the tarball
tmpdir = tempfile.mkdtemp()
log.warn('Extracting in %s', tmpdir)
old_wd = os.getcwd()
try:
os.chdir(tmpdir)
tar = tarfile.open(tarball)
_extractall(tar)
tar.close()

# going in the directory
subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
os.chdir(subdir)
log.warn('Now working in %s', subdir)

# installing
log.warn('Installing Setuptools')
if not _python_cmd('setup.py', 'install', *install_args):
log.warn('Something went wrong during the installation.')
log.warn('See the error message above.')
# exitcode will be 2
return 2
finally:
os.chdir(old_wd)
shutil.rmtree(tmpdir)


def _build_egg(egg, tarball, to_dir):
# extracting the tarball
tmpdir = tempfile.mkdtemp()
log.warn('Extracting in %s', tmpdir)
old_wd = os.getcwd()
try:
os.chdir(tmpdir)
tar = tarfile.open(tarball)
_extractall(tar)
tar.close()

# going in the directory
subdir = os.path.join(tmpdir, os.listdir(tmpdir)[0])
os.chdir(subdir)
log.warn('Now working in %s', subdir)

# building an egg
log.warn('Building a Setuptools egg in %s', to_dir)
_python_cmd('setup.py', '-q', 'bdist_egg', '--dist-dir', to_dir)

finally:
os.chdir(old_wd)
shutil.rmtree(tmpdir)
# returning the result
log.warn(egg)
if not os.path.exists(egg):
raise IOError('Could not build the egg.')


def _do_download(version, download_base, to_dir, download_delay):
egg = os.path.join(to_dir, 'setuptools-%s-py%d.%d.egg'
% (version, sys.version_info[0], sys.version_info[1]))
if not os.path.exists(egg):
tarball = download_setuptools(version, download_base,
to_dir, download_delay)
_build_egg(egg, tarball, to_dir)
sys.path.insert(0, egg)

# Remove previously-imported pkg_resources if present (see
# https://bitbucket.org/pypa/setuptools/pull-request/7/ for details).
if 'pkg_resources' in sys.modules:
del sys.modules['pkg_resources']

import setuptools
setuptools.bootstrap_install_from = egg


def use_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
to_dir=os.curdir, download_delay=15):
to_dir = os.path.abspath(to_dir)
rep_modules = 'pkg_resources', 'setuptools'
imported = set(sys.modules).intersection(rep_modules)
try:
import pkg_resources
except ImportError:
return _do_download(version, download_base, to_dir, download_delay)
try:
pkg_resources.require("setuptools>=" + version)
return
except pkg_resources.DistributionNotFound:
return _do_download(version, download_base, to_dir, download_delay)
except pkg_resources.VersionConflict as VC_err:
if imported:
msg = textwrap.dedent("""
The required version of setuptools (>={version}) is not available,
and can't be installed while this script is running. Please
install a more recent version first, using
'easy_install -U setuptools'.

(Currently using {VC_err.args[0]!r})
""").format(VC_err=VC_err, version=version)
sys.stderr.write(msg)
sys.exit(2)

# otherwise, reload ok
del pkg_resources, sys.modules['pkg_resources']
return _do_download(version, download_base, to_dir, download_delay)

def _clean_check(cmd, target):
"""
Run the command to download target. If the command fails, clean up before
re-raising the error.
"""
try:
subprocess.check_call(cmd)
except subprocess.CalledProcessError:
if os.access(target, os.F_OK):
os.unlink(target)
raise

def download_file_powershell(url, target):
"""
Download the file at url to target using Powershell (which will validate
trust). Raise an exception if the command cannot complete.
"""
target = os.path.abspath(target)
cmd = [
'powershell',
'-Command',
"(new-object System.Net.WebClient).DownloadFile(%(url)r, %(target)r)" % vars(),
]
_clean_check(cmd, target)

def has_powershell():
if platform.system() != 'Windows':
return False
cmd = ['powershell', '-Command', 'echo test']
devnull = open(os.path.devnull, 'wb')
try:
try:
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
except:
return False
finally:
devnull.close()
return True

download_file_powershell.viable = has_powershell

def download_file_curl(url, target):
cmd = ['curl', url, '--silent', '--output', target]
_clean_check(cmd, target)

def has_curl():
cmd = ['curl', '--version']
devnull = open(os.path.devnull, 'wb')
try:
try:
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
except:
return False
finally:
devnull.close()
return True

download_file_curl.viable = has_curl

def download_file_wget(url, target):
cmd = ['wget', url, '--quiet', '--output-document', target]
_clean_check(cmd, target)

def has_wget():
cmd = ['wget', '--version']
devnull = open(os.path.devnull, 'wb')
try:
try:
subprocess.check_call(cmd, stdout=devnull, stderr=devnull)
except:
return False
finally:
devnull.close()
return True

download_file_wget.viable = has_wget

def download_file_insecure(url, target):
"""
Use Python to download the file, even though it cannot authenticate the
connection.
"""
try:
from urllib.request import urlopen
except ImportError:
from urllib2 import urlopen
src = dst = None
try:
src = urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = src.read()
dst = open(target, "wb")
dst.write(data)
finally:
if src:
src.close()
if dst:
dst.close()

download_file_insecure.viable = lambda: True

def get_best_downloader():
downloaders = [
download_file_powershell,
download_file_curl,
download_file_wget,
download_file_insecure,
]

for dl in downloaders:
if dl.viable():
return dl

def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL,
to_dir=os.curdir, delay=15,
downloader_factory=get_best_downloader):
"""Download setuptools from a specified location and return its filename

`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download
attempt.

``downloader_factory`` should be a function taking no arguments and
returning a function for downloading a URL to a target.
"""
# making sure we use the absolute path
to_dir = os.path.abspath(to_dir)
tgz_name = "setuptools-%s.tar.gz" % version
url = download_base + tgz_name
saveto = os.path.join(to_dir, tgz_name)
if not os.path.exists(saveto): # Avoid repeated downloads
log.warn("Downloading %s", url)
downloader = downloader_factory()
downloader(url, saveto)
return os.path.realpath(saveto)


def _extractall(self, path=".", members=None):
"""Extract all members from the archive to the current working
directory and set owner, modification time and permissions on
directories afterwards. `path' specifies a different directory
to extract to. `members' is optional and must be a subset of the
list returned by getmembers().
"""
import copy
import operator
from tarfile import ExtractError
directories = []

if members is None:
members = self

for tarinfo in members:
if tarinfo.isdir():
# Extract directories with a safe mode.
directories.append(tarinfo)
tarinfo = copy.copy(tarinfo)
tarinfo.mode = 448 # decimal for oct 0700
self.extract(tarinfo, path)

# Reverse sort directories.
directories.sort(key=operator.attrgetter('name'), reverse=True)

# Set correct owner, mtime and filemode on directories.
for tarinfo in directories:
dirpath = os.path.join(path, tarinfo.name)
try:
self.chown(tarinfo, dirpath)
self.utime(tarinfo, dirpath)
self.chmod(tarinfo, dirpath)
except ExtractError as e:
if self.errorlevel > 1:
raise
else:
self._dbg(1, "tarfile: %s" % e)


def _build_install_args(options):
"""
Build the arguments to 'python setup.py install' on the setuptools package
"""
return ['--user'] if options.user_install else []

def _parse_args():
"""
Parse the command line for options
"""
parser = optparse.OptionParser()
parser.add_option(
'--user', dest='user_install', action='store_true', default=False,
help='install in user site package (requires Python 2.6 or later)')
parser.add_option(
'--download-base', dest='download_base', metavar="URL",
default=DEFAULT_URL,
help='alternative URL from where to download the setuptools package')
parser.add_option(
'--insecure', dest='downloader_factory', action='store_const',
const=lambda: download_file_insecure, default=get_best_downloader,
help='Use internal, non-validating downloader'
)
options, args = parser.parse_args()
# positional arguments are ignored
return options

def main(version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
options = _parse_args()
tarball = download_setuptools(download_base=options.download_base,
downloader_factory=options.downloader_factory)
return _install(tarball, _build_install_args(options))

if __name__ == '__main__':
sys.exit(main())

pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/PKG-INFO (+308, -0)

@@ -0,0 +1,308 @@
Metadata-Version: 1.1
Name: pyAniSort
Version: 1.0.3
Summary: Automatically sorts anime using information from anidb.net
Home-page: https://github.com/jotaro0010/pyanisort
Author: Jeremy Ottesen
Author-email: jotaro0010@gmail.com
License: MIT Software License
Description: pyAniSort
=========
pyAniSort is a command line utility that will sort and rename anime
video files into folders separated by the name of the series.
Usage
-----
| There are two commands that pyAniSort has:
| The first being ``sort``
| And the second ``undo``
The sort command
~~~~~~~~~~~~~~~~
The sort command requires two arguments: the from directory and the to
directory.
| The verify option will check the CRCs of the files before and after
sorting to verify file integrity
| ``-v, --verify`` Will compare the crc's of the file before and after
the move
| The copy option will copy files rather than move them. Copied files
will not be reflected in the history csv file
| ``-c, --copy`` Will copy files instead of move them(history.csv will
not be updated)
| The silent option turns off any parts of the script that would ask for
user input
| ``-s, --silent`` Turn off console interactivity
| The history argument takes the name of a csv file that will store the
renaming history
| ``--history FILE`` changes where to save history file ('history.csv'
is the default)
``$ pyAniSort sort 'from/directory' 'to/directory' -s --history history.csv``
The program will sort this:
::
|-- From Folder/
| | [Sub Group A] Series Name - 01 [ABCD1234].mkv
| | [Sub Group A] Series Name - 02 [ABCD1234].mkv
| | [Sub Group A] Series Name - 03 [ABCD1234].mkv
| | [Sub Group B] Other Series Name Ep01 [ABCD1234].mkv
| | [Sub Group B] Other Series Name Ep02 [ABCD1234].mkv
| | [Sub Group B] Other Series Name Ep03 [ABCD1234].mkv
| | [Sub Group B] Other Series Name OP [ABCD1234].mkv
| | [Sub Group B] Other Series Name ED1 [ABCD1234].mkv
To This:
::
|-- To Folder/
| |-- Series Name/
| | |-- Series Name - 01 - title.mkv
| | |-- Series Name - 02 - title.mkv
| | |-- Series Name - 03 - title.mkv
| |-- Other Series Name/
| | |-- Other Series Name - 01 - title.mkv
| | |-- Other Series Name - 02 - title.mkv
| | |-- Other Series Name - 03 - title.mkv
| | |-- Other Series Name - OP01.mkv
| | |-- Other Series Name - ED01.mkv
The undo command
~~~~~~~~~~~~~~~~
The undo command will use the history.csv file to undo the sorting
operation in case there was an error.
There are two positional arguments that are required for the
undo command
| The verify option will check the CRCs of the files before and after
sorting to verify file integrity
| ``-v, --verify`` Will compare the crc's of the file before and after
the move
| The history argument takes the name of a csv file that will store the
renaming history
| ``--history FILE`` changes where to save history file ('history.csv'
is the default)
``$ pyanisort undo startLine endLine --history history.csv``
| The first one will tell the program what line of the file to start on
and the second will tell it what line to end on.
| This allows better control of what files to undo
| Running the following command will start undoing the files stored in
history.csv from line 30 to line 40, or until the end of the file if
there are fewer than 40 lines.
| ``$ pyanisort undo 30 40``
| This next command will undo all of the files stored in the history.csv
file.
| ``$ pyanisort undo 0 0``
| Both of the following commands will only undo the file at line 44 of
the history.csv file
| ``$ pyanisort undo 44 44``
| ``$ pyanisort undo 44 0``
After any one of these commands is used, the history.csv file will be
modified to reflect the undo operation.
Logs and other Important Files
------------------------------
| Logs and program data are stored in the following locations:
| Windows: ``%APPDATA%\pyAniSort``
| Linux: ``~/.pyanisort``
| There are two files that are automatically created when pyAniSort is
run.
| prefNames.csv and history.csv
``prefNames.csv`` - Preferred show names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| This file is for storing information about the show. This helps save
time when gathering show information multiple times.
| There are three values stored in the csv file: the **anime ID (aid)**,
**official show name**, and the **parsed name**
| The **aid** is the unique id that the anidb database uses for the show
| The **official show name** is the full series name pulled from the
anidb.
| It is also the name that will be used when renaming and sorting the
video files.
| The **parsed name** is the name that has been pulled from the filename
before it was sorted.
+--------+----------------------------------------------------------------------------+----------------------+
| aid | Official Name | Parsed Name |
+========+============================================================================+======================+
| 9541 | Shingeki no Kyojin | Shingeki no Kyojin |
+--------+----------------------------------------------------------------------------+----------------------+
| 9787 | Ore no Nounai Sentakushi ga, Gakuen Lovecome o Zenryoku de Jama Shiteiru | NouCome |
+--------+----------------------------------------------------------------------------+----------------------+
This is the contents of prefNames.csv that match the table
::
prefName.csv
9541,Shingeki no Kyojin,Shingeki no Kyojin
9787,"Ore no Nounai Sentakushi ga, Gakuen Lovecome o Zenryoku de Jama Shiteiru",NouCome
--------------
One of the useful things about putting this information in a csv file
like this is that the changes can be made to it outside of the program.
| For example:
| The official name of ``NouCome`` above is quite a mouthful. It also
takes up a lot of space and might even make new filenames run into the
255 character limit on Windows.
| So wouldn't it be better if you could change this:
| ``Ore no Nounai Sentakushi ga, Gakuen Lovecome o Zenryoku de Jama Shiteiru``
| To its shorthand name:
| ``NouCome``
You can do this by editing the ``prefName.csv`` file. I suggest using a
standard text editor rather than Excel. Excel might mess up the file and cause
a problem when the program reads it.
So you would edit the ``prefName.csv`` file from:
::
prefName.csv
9541,Shingeki no Kyojin,Shingeki no Kyojin
9787,"Ore no Nounai Sentakushi ga, Gakuen Lovecome o Zenryoku de Jama Shiteiru",NouCome
To this:
::
prefName.csv
9541,Shingeki no Kyojin,Shingeki no Kyojin
9787,"NouCome",NouCome
Now when the program goes to rename your files it will use ``NouCome``
instead of
``Ore no Nounai Sentakushi ga, Gakuen Lovecome o Zenryoku de Jama Shiteiru``
``history.csv`` - File rename history
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are two columns in the history.csv file. The first refers to the
original location of a video file and the second refers to the sorted
location
+----------------------------------------------------------------------+--------------------------------------------------------------------+
| Original Name | Sorted Name |
+======================================================================+====================================================================+
| D:\\test\_files[Sub Group A] Series Name - 01 [ABCD1234].mkv | D:\\Anime\\Series Name\\Series Name - 01 - title.mkv |
+----------------------------------------------------------------------+--------------------------------------------------------------------+
| D:\\test\_files[Sub Group B] Other Series Name Ep01 [ABCD1234].mkv | D:\\Anime\\Other Series Name\\Other Series Name - 01 - title.mkv |
+----------------------------------------------------------------------+--------------------------------------------------------------------+
This is an example of the contents of history.csv using real filenames
::
history.csv
D:\test_files\[EveTaku] Shingeki no Kyojin - 25 (1280x720 x264-Hi10P AAC)[783716E5].mkv,D:\Anime\Shingeki no Kyojin\Shingeki no Kyojin - 25 - The Wall Raid on Stohess District (3).mkv
D:\test_files\[Irrational Typesetting Wizardry] NouCome - 01 [F87C6CC0].mkv,"D:\Anime\Ore no Nounai Sentakushi ga, Gakuen Lovecome o Zenryoku de Jama Shiteiru\Ore no Nounai Sentakushi ga, Gakuen Lovecome o Zenryoku de Jama Shiteiru - 01 - That Choice Put My Life in Motion.mkv"
Installation
------------
| Link to PyPI page: https://pypi.python.org/pypi/pyAniSort
| There is also a Windows installation binary if you don't want to
install pip.
| Make sure that you are using the python3 version of pip when
installing.
| This program only works with python3
``$ pip install pyanisort``
| If you don't have pip installed you can run these commands from the
terminal to get it
| ``$ sudo curl http://python-distribute.org/distribute_setup.py | python3``
| ``$ sudo curl https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python3``
Possible Errors
---------------
There are a few possible errors that may occur when running this script
Banned
''''''
``0000-00-00 00:00:00,000 - pyanisort.findtitle - ERROR - findtitle : 62 - Error parsing through cache\0000.xml.gz: Banned``
| This means that the anidb.net server has gotten too many requests from
the machine's IP address.
| It will refuse any more connections for the next couple of hours.
This is a security measure put in place by the server, and I have not
found any way around it other than waiting a couple of hours before
running the script again.
Contact Me
----------
Questions or comments about ``pyAniSort``? Send me an email at
`jotaro0010@gmail.com <mailto:jotaro0010@gmail.com>`__.
pyAniSort 1.0.3 (February 23, 2014)
-----------------------------------
- Added an option to compare CRCs of files before and after they are
sorted to verify integrity of file transfers
- An option that has the program copy files rather than move them.
Copied files are not reflected in the history file
- Will now detect if the file is an opening or ending song and will
rename it accordingly
- Will save a file with traceback if an unexpected error occurs
pyAniSort 1.0.2 (February 11, 2014)
-----------------------------------
- Program files are now created and saved to ~/.pyanisort on Linux and
%APPDATA%\\pyAniSort on Windows
- Short pause between downloads of series xml files. This will help
prevent temp bans - February 12, 2014
- Fixed bugs with program data creation on Linux - February 12, 2014
pyAniSort 1.0.1 (February 09, 2014)
-----------------------------------
- Restructured program so that it could be downloaded and installed
through the Python Package Index
- Created setup.py and init.py
- Program now changes working directory to program location to use data
files stored there
pyAniSort 1.0.0 (February 06, 2014)
-----------------------------------
- Initial upload
Platform: any
Classifier: Development Status :: 3 - Alpha
Classifier: Programming Language :: Python
Classifier: License :: OSI Approved :: MIT License
Classifier: Environment :: Console
Classifier: Intended Audience :: End Users/Desktop

pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/SOURCES.txt (+19, -0)

@@ -0,0 +1,19 @@
CHANGELOG.md
LICENSE
MANIFEST.in
README
README.md
ez_setup.py
setup.py
pyAniSort.egg-info/PKG-INFO
pyAniSort.egg-info/SOURCES.txt
pyAniSort.egg-info/dependency_links.txt
pyAniSort.egg-info/entry_points.txt
pyAniSort.egg-info/top_level.txt
pyanisort/__init__.py
pyanisort/findtitle.py
pyanisort/pyanisort.py
pyanisort/seriesmatch.py
pyanisort/utilities.py
pyanisort/conf/logger.conf
pyanisort/logs/pyAniSort.log

pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/dependency_links.txt (+1, -0)

@@ -0,0 +1 @@


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/entry_points.txt (+3, -0)

@@ -0,0 +1,3 @@
[console_scripts]
pyanisort = pyanisort.pyanisort:main


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyAniSort-1.0.3-py3.4.egg-info/top_level.txt (+2, -0)

@@ -0,0 +1,2 @@
ez_setup
pyanisort

pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__init__.py (+9, -0)

@@ -0,0 +1,9 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

"""pyanisort - Automatically sorts anime using information from anidb.net
Gathers information from anidb.net using the HTTP API to rename files from something like "[subber] Show - 00.mkv" to "Show - 00 - Episode Title.mkv"

"""

__version__ = '1.0.3'

pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/__init__.cpython-34.pyc (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/__init__.cpython-34.pyo (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/findtitle.cpython-34.pyc (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/findtitle.cpython-34.pyo (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/pyanisort.cpython-34.pyc (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/pyanisort.cpython-34.pyo (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/seriesmatch.cpython-34.pyc (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/seriesmatch.cpython-34.pyo (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/utilities.cpython-34.pyc (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/__pycache__/utilities.cpython-34.pyo (BIN)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/conf/logger.conf (+31, -0)

@@ -0,0 +1,31 @@
[loggers]
keys=root


[logger_root]
handlers=screen,file
level=NOTSET

[formatters]
keys=simple,complex

[formatter_simple]
format=%(levelname)s - %(message)s

[formatter_complex]
format=%(asctime)s - %(name)s - %(levelname)s - %(module)s : %(lineno)d - %(message)s

[handlers]
keys=file,screen

[handler_file]
class=handlers.RotatingFileHandler
formatter=complex
level=DEBUG
args=('logs/pyAniSort.log',1024,3)

[handler_screen]
class=StreamHandler
formatter=simple
level=INFO
args=(sys.stdout,)
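
# Summary of the configuration above: handler_file writes DEBUG-and-above records
# to logs/pyAniSort.log using the complex format, while handler_screen prints
# INFO-and-above records to stdout using the simple format.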

pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/findtitle.py (+194, -0)

@@ -0,0 +1,194 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

try:
import pyanisort.utilities as utilities
except ImportError:
import utilities
import os
import re
import xml.etree.ElementTree as ET
import logging
from time import sleep

logger = logging.getLogger(__name__)
aid=''
version=''

#download series xml file from anidb server
def downloadSeriesXML(xmlFileName, aid, version):
logger.info("Downloading information for {1}".format(xmlFileName, aid))
# wait to prevent ban on anidb server
sleep(3)
url = 'http://api.anidb.net:9001/httpapi?request=anime&client=pyanisort&'
url += 'clientver=' + str(version)
url += '&protover=1&aid=' + str(aid)
utilities.downloadFile(url, xmlFileName)
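# For example, with version=1 and aid=9541 the request URL built above is:
# http://api.anidb.net:9001/httpapi?request=anime&client=pyanisort&clientver=1&protover=1&aid=9541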

#parse through series xml file and make a list of all titles
#with corresponding episode number
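#returns a list of [episode_number, english_title] pairs,
#e.g. [['01', 'Some Title'], ['02', 'Another Title'], ...]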
def parseSeriesXML(seriesXMLFilename):
# open xml file download it if it doesn't exist
try:
xmlFile = utilities.openFile(seriesXMLFilename)
except IOError as e:
logger.warning("There is no local information for series: {0}".format(aid))
downloadSeriesXML(seriesXMLFilename, aid, version)
xmlFile = utilities.openFile(seriesXMLFilename)
tree = ET.parse(xmlFile)
root = tree.getroot()
episodes = root.find('episodes')

#if the episodes tag is none check for error
if episodes is None:
if root.tag == 'error':
if utilities.checkFileAge(seriesXMLFilename, 0.20):
logger.debug("File '{0}' is several hours old re-downloading".format(seriesXMLFilename))
downloadSeriesXML(seriesXMLFilename, aid, version)
try:
xmlFile = utilities.openFile(seriesXMLFilename)
except IOError as e:
logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, seriesXMLFilename, e.strerror))
return
tree = ET.parse(xmlFile)
root = tree.getroot()
episodes = root.find('episodes')
if episodes is None:
if root.tag == 'error':
logger.error("Error parsing through {0}: {1}".format(seriesXMLFilename, root.text))
return
else:
logger.error('unknown error occurred parsing {0}'.format(seriesXMLFilename))
return
else:
logger.error("Error parsing through {0}: {1}".format(seriesXMLFilename, root.text))
return
else:
logger.error('unknown error occurred parsing {0}'.format(seriesXMLFilename))
return

allEpInfo = []
for episode in episodes.iterfind('episode'):
epInfo=[]
episodeNo = episode.find('epno')
if (episodeNo.get('type') == '1'):
title = episodeNo.text
title = int(title)
epInfo.append('{0:0=2d}'.format(title))
for title in episode.findall('title'):
langAttrib = '{http://www.w3.org/XML/1998/namespace}lang'
if title.get(langAttrib) == 'en':
t = title.text
# replace any |/\ with a comma
t = re.sub('\s?[\\/|]', ',', t)
epInfo.append(t)
allEpInfo.append(epInfo)
return allEpInfo

#generate list of new file names for anime in the form 'series - 00 - title'
#return a list of file names and new names to be renamed later.
#TODO create a way to customize new name formatting
def generateFilenamesSeries(xmlFilename, outDir,
seriesName, filenames, titleList):
# sort descending for faster matching later
filenames = sorted(filenames, reverse=True)
titleList = sorted(titleList, reverse=True)

# if the highest file episode is > the highest episode from the xmlFile
if (filenames[0][0] > titleList[0][0]):
if utilities.checkFileAge(xmlFilename, 0.5):
downloadSeriesXML(xmlFilename, aid, version)
logger.info("Downloading newer file '{0}' for show {1}".format(xmlFilename, seriesName))
try:
xmlFile = utilities.openFile(xmlFilename)
except IOError as e:
logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, filename, e.strerror))
titleList = parseSeriesXML(xmlFilename)
titleList = sorted(titleList, reverse=True)
else:
logger.error('{0} xml file is up to date'.format(seriesName))

#start generating new names
newNames = []
for fileEp, file in filenames:
filename, ext = os.path.splitext(file)
path, filename = os.path.split(file)

# checks if the file is an ending or opening
if fileEp[0] == 'E' or fileEp[0] == 'e':
m = re.search('\d{1,2}',fileEp)
try:
ep = m.group(0)
ep = '{0:0=2d}'.format(int(ep))
except AttributeError:
ep = '01'
newFilename = '{0} - ED{1}{2}'.format(
seriesName, ep, ext)
newFilename = os.path.join(outDir, seriesName, newFilename)
newNames.append([file, newFilename])
continue
elif fileEp[0] == 'O' or fileEp[0] == 'o':
m = re.search('\d{1,2}',fileEp)
try:
ep = m.group(0)
ep = '{0:0=2d}'.format(int(ep))
except AttributeError:
ep = '01'
newFilename = '{0} - OP{1}{2}'.format(
seriesName, ep, ext)
newFilename = os.path.join(outDir, seriesName, newFilename)
newNames.append([file, newFilename])
continue
# If it isn't then it is an episode
for ep, title in titleList:
if fileEp == ep:
newFilename = '{0} - {1} - {2}{3}'.format(
seriesName, ep, title, ext)
newFilename = os.path.join(outDir, seriesName, newFilename)
newNames.append([file, newFilename])
break
elif (fileEp > ep):
title = 'Episode {0}'.format(fileEp)
newFilename = '{0} - {1} - {2}{3}'.format(
seriesName, fileEp, title, ext)
newFilename = os.path.join(outDir, seriesName, newFilename)
newNames.append([file, newFilename])
return newNames

#generate a list of new names for all files based off database info
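#allShows is expected in the shape produced by seriesmatch.groupAnimeFiles:
#[[aid, seriesName, [ep, filepath], [ep, filepath], ...], ...]
#returns [[oldPath, newPath], ...] ready to be passed to utilities.renameFiles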
def generateFilenames(ver, allShows, outDir, cacheDir):
global version
global aid
version = ver

outDir = os.path.abspath(outDir)
allNewFilenames = []
for series in allShows:
aid = series[0]
seriesName = series[1]
filenames = series[2:]
xmlFilename = os.path.join(cacheDir, '{0}.xml.gz'.format(aid))

logger.debug("Now processing information for show {0}, {1}".format(aid, seriesName))

#get list of episodes and titles
titleList = parseSeriesXML(xmlFilename)
if titleList is None:
logger.error('an error has occurred while processing information for series {0}'.format(seriesName))
continue

#use all previous info to generate list of new names
newFilenames = generateFilenamesSeries(xmlFilename, outDir, seriesName,
filenames, titleList)
# validate file names
if len(newFilenames) != 0:
for name in newFilenames:
name[1] = utilities.validateFilename(name[1])
allNewFilenames.append(name)

return allNewFilenames



pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/logs/pyAniSort.log (+0, -0)


pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/pyanisort.py (+144, -0)

@@ -0,0 +1,144 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

try:
import pyanisort.utilities as utilities
import pyanisort.seriesmatch as seriesmatch
import pyanisort.findtitle as findtitle
from pyanisort.__init__ import __version__
except ImportError:
import utilities
import seriesmatch
import findtitle
from __init__ import __version__
import logging
import logging.config
import argparse
import os
import sys
import shutil

def findConfDir():
if sys.platform == 'win32':
return os.path.join(os.getenv('appdata'), 'pyAniSort')
elif sys.platform.startswith('linux'):
return os.path.join(os.getenv('HOME'), '.pyanisort')
def makeConfig(confPath):
os.mkdir(confPath)
os.chdir(os.path.dirname(os.path.abspath(__file__)))
shutil.copytree('logs', os.path.join(confPath, 'logs'))
shutil.copytree('conf', os.path.join(confPath, 'conf'))
def main():
parser = argparse.ArgumentParser(description='Will automatically sort and rename anime files in a folder based off information gathered from anidb.net')
subparsers = parser.add_subparsers(help='subcommand help', dest='command')
sortParser = subparsers.add_parser('sort', help='Will sort anime based on anidb info')
sortParser.add_argument("fromDir", help="The directory with the files you want to sort")
sortParser.add_argument("toDir", help="The directory where files will go once sorted")
sortParser.add_argument("-v", "--verify", action="store_true",
help="Will compare the crc's of the file before and after the move")
sortParser.add_argument("-c", "--copy", action="store_true",
help="Will copy files instead of move them(history.csv will not be updated)")
sortParser.add_argument("-s", "--silent", action="store_true",
help="Turn off output to console (will still log to file)")
sortParser.add_argument("--history", help="history csv file containing the original path then the current path")

undoParser = subparsers.add_parser('undo', help='undo file sorting based of of a history csv file')
undoParser.add_argument("startLine", type=int, help="the line of the csv file to start rename undo (enter 0 0 to undo entire file)")
undoParser.add_argument("endLine", type=int, help="the line of the csv file to end rename undo (enter 0 to only undo the line of startLine)")
undoParser.add_argument("-v", "--verify", action="store_true",
help="Will compare the crc's of the file before and after the move")
undoParser.add_argument("--history", help="history csv file containing the original path then the current path")
args = parser.parse_args()
if args.command is None:
parser.print_help()
return
if args.command == 'undo':
if args.history is not None:
history=os.path.abspath(args.history)
verify = args.verify
confDir = findConfDir()
try:
os.chdir(confDir)
except (IOError, OSError):
makeConfig(confDir)
os.chdir(confDir)

logging.config.fileConfig('conf/logger.conf', disable_existing_loggers=False)
logger = logging.getLogger('root')
logger.info('start moving file back to their original locations')
try:
if args.history is None:
utilities.undoRename(args.startLine, args.endLine, verify=verify)
else:
utilities.undoRename(args.startLine, args.endLine, verify=verify, filename=history)
except ValueError:
logger.error("Could not undo specified line please check history csv file and ensure that line isn't blank")
return 1
logger.info('files have finished moving back to their original locations')
elif args.command == 'sort':

#ensure that program has full path for to and from directories before program cd's to its current location
fromDir = os.path.abspath(args.fromDir)
toDir = os.path.abspath(args.toDir)
silent = args.silent
verify = args.verify
copy = args.copy
if args.history is not None:
history=os.path.abspath(args.history)
cacheDir = 'cache'
confDir = findConfDir()
try:
os.chdir(confDir)
except (IOError, OSError):
makeConfig(confDir)
os.chdir(confDir)
logging.config.fileConfig('conf/logger.conf', disable_existing_loggers=False)
logger = logging.getLogger('root')

#need first character of version to use when downloading files using anidb HTTP api
version = __version__[0]

logger.info("Starting to group files")
try:
allShows = seriesmatch.groupAnimeFiles(fromDir, silentMode=silent)
except IOError as e:
logger.error("Program exited with an error")
return 1
logger.info("Finished grouping files")

logger.info("Starting to generate filenames")
allNewFilenames = findtitle.generateFilenames(int(version), allShows, toDir, cacheDir)
logger.info("Finished generating filenames")

logger.info("Starting to rename files")
if args.history is None:
utilities.renameFiles(allNewFilenames, verify=verify, copy=copy, storeHistory=True)
else:
utilities.renameFiles(allNewFilenames, verify=verify, copy=copy, histFile=history, storeHistory=True)
logger.info("Files have been renamed")
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
pass
except:
confDir = findConfDir()
logger = logging.getLogger("crash")
formating = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
logger.setLevel(logging.ERROR)
handler = logging.FileHandler(os.path.join(confDir, 'logs', 'crash.log'))
handler.setFormatter(formating)
logger.addHandler(handler)
logger.exception("Program has crashed")

pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/seriesmatch.py (+243, -0)

@@ -0,0 +1,243 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
try:
import pyanisort.utilities as utilities
except ImportError:
import utilities
import xml.etree.ElementTree as ET
import re
import difflib
import csv
import sys
import os
import io
import copy
import errno
import logging

logger = logging.getLogger(__name__)

# Set regex to a regex string or an array of regex strings to loop through
# when creating new regex group one must be show name group two must be episode number
#(!@#$%^&*()\?<>;:'"{}[]|~`+=) all of these are valid characters on Linux systems, included for completeness
# returns [show, ep]
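# e.g. a filename such as '[Sub Group A] Series Name - 01 [ABCD1234].mkv' (hypothetical)
# would be expected to yield ['Series Name', '01']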
def parseFilename(filename ,regex=None):
path, file = os.path.split(filename)
if (regex is None):
regex = [
"(?i)(?:[^]]*\][ _.]?)((?:(?!-?[ _.](?:En?D(?:ing)?|OP(?:ening)?|E?P?(?:isode[ _.]?)?\d{2,3}))[!@#$%^&*()\\?<>;:\'\"{}\[\]|~`+=\w\s._-])+)(?:(?!\d|En?D(?:ing)?|OP(?:ening)?|\[[\dA-F]{8}\]).)*(\d{2,3})", # Matches Show and episode (Requires Sub Group for Accuracy)
"(?i)(?:[^]]*\][ _.]?)((?:(?!-?[ _.](?:En?D(?:ing)?|OP(?:ening)?|E?P?(?:isode[ _.]?)?\d{2,3}))[!@#$%^&*()\\?<>;:\'\"{}\[\]|~`+=\w\s._-])+)(?:(?!En?D(?:ing)?|OP(?:ening)?|\[[\dA-F]{8}\]).)*(OP(?:ening)?[ _.]?(?:\d{1,2})?|En?D(?:ing)?[ _.]?(?:\d{1,2})?)", # Matches Opening and Endings (Requires Sub Group for Accuracy)
"(?i)((?:(?!-?[ _.](?:En?D(?:ing)?|OP(?:ening)?|E?P?(?:isode[ _.]?)?\d{2,3}))[!@#$%^&*()\\?<>;:\'\"{}\[\]|~`+=\w\s._-])+)(?:(?!\d|En?D(?:ing)?|OP(?:ening)?|\[[\dA-F]{8}\]).)*(\d{2,3})",# Matches Show and episode (Doesn't require sub group)
"(?i)(?:[^]]*\][ _.]?)((?:(?!-?[ _.](?:En?D(?:ing)?|OP(?:ening)?|E?P?(?:isode[ _.]?)?\d{2,3}))[!@#$%^&*()\\?<>;:\'\"{}\[\]|~`+=\w\s._-])+)(?:(?!En?D(?:ing)?|OP(?:ening)?|\[[\dA-F]{8}\]).)*(OP(?:ening)?[ _.]?(?:\d{1,2})?|En?D(?:ing)?[ _.]?(?:\d{1,2})?)" # Matches Opening and Endings (Doesn't require sub group)
]
else:
if type(regex) is list:
regex = regex
elif type(regex) is str:
regex = [regex]
else:
return
index = 0
while index < len(regex):
reg = regex[index]
m = re.match(reg, file)
try:
show = m.group(1)
ep = m.group(2)
break
except AttributeError as e:
logger.debug("Regex {0}: Could not find match in file '{1}'".format(index, file))
if index < len(regex):
index += 1
else:
logger.debug("Could not find match in file '{1}'".format(reg, file))
return

show = re.sub('[_.]', ' ', show)
show = show.rstrip() # remove trailing spaces
logger.debug("Regex {0}: Found match in file '{1}': ({2}, {3})".format(index, file, show, ep))
return [show, ep]

# set precision to a float between 0 and 1
# root is the root tag of the xmlfile
# the closer to 0 the less precise the match
# returns [[aid, title], [aid, title] ... ]
def findShowMatches(findMatch, root, precision=.9):
allMatches = []

# search through anime subtags for a matching title
# make an array of all the names in the anime
# then use the difflib library to find the closest match
for anime in root.findall('anime'):
titleList = [anime.get('aid'), anime ]
# store iter for current anime
for title in anime.iterfind('title'):
titleList.append(title.text)
match = difflib.get_close_matches(findMatch, titleList,
cutoff=precision)
if match:
# use iter from before to find main title
for title in titleList[1].iterfind('title'):
if (title.get('type') == 'main'):
allMatches.append([titleList[0], title.text])
return allMatches

# command line method to choose one value from a list
def listChoice(matchList):
while True:
for i in range(len(matchList)):
print ('{0}: {1}'.format(i+1, matchList[i]))
try:
choice = input('Please select the correct title: ')
choice = int(choice)
if choice > 0 and choice <= len(matchList):
break
else:
print ('Choice is not valid', file=sys.stderr)
except ValueError:
print ('Error please enter a number', file=sys.stderr)
choice -= 1
return matchList[choice]

# checks the csv file for a preferred title
# returns [aid, prefName, foundName]
def findPrefName(filename, findMatch):
try:
with open(filename) as prefTReader:
prefTCSVReader = csv.reader(prefTReader)
for line in prefTCSVReader:
if line[2] == findMatch:
return line
except IOError as e:
logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, filename, e.strerror))
raise e

# saves list of preferred names to a csv file without making duplicate entries
def saveprefNames(filename, prefNameList):
prefNameListCopy = copy.deepcopy(prefNameList)
try:
with open(filename, 'a', newline='') as prefTWriter, \
open(filename, newline='') as prefTReader:
prefTCSVReader = csv.reader(prefTReader)
for line in prefTCSVReader:
i = 0
while i < len(prefNameListCopy):
if line[2] == prefNameListCopy[i][2]:
del prefNameListCopy[i]
break
i +=1
prefTCSVWriter = csv.writer(prefTWriter)
prefTCSVWriter.writerows(prefNameListCopy)
except IOError as e:
logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, filename, e.strerror))

# parses xml file for title names downloads new one if necessary
# silent skips over anything that requires user input
def generatePrefNameCSV(xmlFilename, ShowList):
try:
xmlFileObject = utilities.openFile(xmlFilename)
except IOError as e:
url = 'http://anidb.net/api/animetitles.xml.gz'
utilities.downloadFile(url, xmlFilename)
xmlFileObject = utilities.openFile(xmlFilename)
logger.info("'{0}' not found: downloading".format(xmlFilename))

#open xmlfile once here for faster parsing
tree = ET.parse(xmlFileObject)
root = tree.getroot()
preferedNames=[]
for show in ShowList:
showMatches = findShowMatches(show, root)
if (len(showMatches) == 0):
# check date of latest animetitles.xml.gz
# get new one if it was not already downloaded today
if utilities.checkFileAge(xmlFilename):
url = 'http://anidb.net/api/animetitles.xml.gz'
utilities.downloadFile(url, xmlFilename)
# search through xml file again
xmlFileObject = utilities.openFile(xmlFilename)
tree = ET.parse(xmlFileObject)
root = tree.getroot()
showMatches = findShowMatches(show, root)
#get user to pick show from list or write a warning and do nothing if silent is on
if (len(showMatches) > 1):
if not silent:
print('{0} matches for show {1}'.format(
len(showMatches), show))
showChoice = listChoice(showMatches)
showChoice.append(show)
preferedNames.append(showChoice)
showMatches = [showChoice]
logger.debug("{0} Matched: '{1}' with {2}".format(xmlFilename, show, showChoice))
else:
logger.warning("Multiple matches found for title '{0}' silent mode is on".format(title))
showMatches = []
elif (len(showMatches) == 1):
showMatches[0].append(show)
preferedNames.append(showMatches[0])
logger.debug("{0} Matched: '{1}' with {2}".format(xmlFilename, show, showMatches[0]))
else:
logger.error("Anime '{0}' not found".format(show))

return preferedNames

# Each list contains the aid, preferred title, and all files in the series
def groupAnimeFiles(vidFilesLoc, xmlFilename='animetitles.xml.gz',
csvFile='prefName.csv', silentMode=False):
global silent
silent = silentMode

try:
vidFiles = utilities.listAllfiles(vidFilesLoc)
except IOError as e:
logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, vidFilesLoc, e.strerror))
raise e

# make a list of all the different tv shows names
showNames = [] # list of show titles with duplicates
fileInfo = [] # list of info pulled from file name
for file in vidFiles:
showName = parseFilename(file)
if showName is None:
continue

showName.append(file)
fileInfo.append(showName)
showNames.append(showName[0])

shows = set(showNames)# list of show titles without duplicates
shows = list(shows)# convert back to list
preferedNames=[]
allShowsAndFiles = []

index = 0
# check csv file for shows
while index < len(shows):
# store as a list of lists for consistency with pulled matches
try:
showMatch = findPrefName(csvFile, shows[index])
except IOError as e:
break
if (showMatch is not None):
preferedNames.append(showMatch)
logger.debug("{0} Matched: '{1}' with {2}".format(csvFile, shows[index], showMatch))
del shows[index]
else:
index += 1

# check xml file for shows
preferedNames += generatePrefNameCSV(xmlFilename, shows)

saveprefNames(csvFile, preferedNames)

# group all files with show info
for aid, prefName, originalName in preferedNames:
animeFiles = [aid, prefName]
for file in fileInfo:
if file[0] == originalName:
animeFiles.append(file[1:])
allShowsAndFiles.append(animeFiles)
return allShowsAndFiles



pyanisort-git/pkg/pyanisort-git/usr/lib/python3.4/site-packages/pyanisort/utilities.py (+233, -0)

@@ -0,0 +1,233 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import urllib.request
import urllib.error
import shutil
import os
import sys
import csv
import gzip
import io
import string
import unicodedata
import re
import errno
import zlib
from datetime import datetime, timedelta
import logging

logger = logging.getLogger(__name__)

#TODO add error handling and logging
def listAllfiles(startDir):
startDir = os.path.abspath(startDir)
files = []
for dirname, dirnames, filenames in os.walk(startDir):
# print path to all filenames.
for filename in filenames:
files.append(os.path.join(dirname, filename))
return files

#returns string file object
def openFile(filename):
try:
binaryFile = gzip.open(filename).read()
stringFile = io.StringIO(binaryFile.decode('utf8'))
except IOError as e:
if str(e) == 'Not a gzipped file':
logger.warning('\'{0}\' is not a gzipped file'.format(filename))
try:
stringFile = open(filename)
except IOError as e:
raise e
else:
raise e
return stringFile

def downloadFile(url, filename):
# only supports relative paths
path, file = os.path.split(filename)
if path != '':
# make any directories that were not there before
try:
os.makedirs(path)
except (IOError, OSError) as e:
if (errno.errorcode[e.errno] == 'EEXIST'):
pass
else:
logger.error("Error[{0}] in directory {1}: {2}".format(e.errno, path, e.strerror))
try:
with urllib.request.urlopen(url) as response,\
open(filename, 'wb') as outFile:
shutil.copyfileobj(response, outFile)
except IOError as e:
logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, filename, e.strerror))
except (ValueError, urllib.error.HTTPError) as e:
logger.error("Error downloading file {0} from '{1}': {2}".format(filename, url, e.strerror))



def checkFileAge(filename, daysOld=1):
checkTime = datetime.now() - timedelta(days=daysOld)
filetime = datetime.fromtimestamp(os.path.getmtime(filename))
if filetime < checkTime:
return True
else:
return False
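# e.g. checkFileAge('animetitles.xml.gz') returns True once the cached file is more
# than one day old, which is what triggers a re-download in seriesmatch.generatePrefNameCSV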

def validateFilename(filename):
# removes any invalid characters \/'":?"<>|
drive, filePath = os.path.splitdrive(filename)
folder, file = os.path.split(filePath)
folder = re.sub('[\':?*"<>|]', '', folder) # doesn't remove \/ since it is a path
file = re.sub('[\'\\/:?*"<>|]', '', file)
return os.path.join(drive, folder, file)

def getCRC(filename, message='Calculating CRC'):
buffer = 65536
crc = 0
data = 0
totalSize = os.path.getsize(filename)
index = 0
if buffer > totalSize:
buffer = totalSize
# prevents a divide by zero
if totalSize == 0:
totalSize = 1
with open(filename, 'rb') as f:
while data != b'':
data = f.read(buffer)
index += 1
sys.stdout.write('{0}: {1:.0%}\r'.format(message, (buffer*index)/totalSize))
sys.stdout.flush()
crc = zlib.crc32(data, crc)
sys.stdout.write('{0}: {1:.0%}: {2:8X}'.format(message, (buffer*index)/totalSize, crc))
sys.stdout.write('\n')
sys.stdout.flush()
return '{0:8X}'.format(crc)

def undoRename(lineStart, lineStop, verify=False, filename='history.csv'):
# decrement value by one to use in array
lineStart -= 1
lineStop -= 1
# get all rows store them in an array use array to rename certain amount of files
try:
fileArr=[] # for actually renaming the files
lineArr=[] # for storing the files that were not renamed back into the file
with open(filename, newline='') as histReader:
histCSVReader = csv.reader(histReader)
i=0
for line in histCSVReader:
try:
originalDir, newDir = line
lineArr.append(line)
fileArr.append([newDir, originalDir])

except ValueError as e:
#keep line numbers relevant when blank lines exist in the middle of the file
if (i == lineStart and lineStart == lineStop) or (i == lineStart and lineStop == -1):
logger.error("Line {0} is invalid. might be a blank line".format(lineStart + 1))
raise e
if i < lineStart:
lineStart -= 1
lineStop -= 1
elif i > lineStart and i < lineStop:
lineStop -=1
i += 1

# if both numbers are the same copy one line
# do the same if the second number is 0

#rename all files specified in history file
if lineStart == lineStop and lineStart == -1:
renameFiles(fileArr, verify=verify)
elif lineStart == lineStop or lineStop == -1:
renameFiles([fileArr[lineStart]], verify=verify)
else:
# add one to lineStop or that line would not have been included
renameFiles(fileArr[lineStart:(lineStop+1)], verify=verify)
except IOError as e:
logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, filename, e.strerror))
return
try:
# save history file with whatever files where not just moved
with open(filename, 'w', newline='') as histWriter:
histCSVWriter = csv.writer(histWriter)
#inverse of what was renamed earlier
if lineStart == lineStop and lineStart == -1:
# since entire file was run remove all lines from history
histCSVWriter.writerows([])
elif lineStart == lineStop or lineStop == -1:
# removing only one line
histCSVWriter.writerows(lineArr[:lineStart] + lineArr[(lineStart+1):])
else:
histCSVWriter.writerows(lineArr[:lineStart] + lineArr[(lineStop+1):])

except IOError as e:
logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, filename, e.strerror))

def renameFiles(filenameList, verify=False, copy=False, histFile='history.csv', storeHistory=False):
    # open csv writer for creating the rename history file
    if storeHistory:
        try:
            histWriter = open(histFile, 'a', newline='')
            histCSVWriter = csv.writer(histWriter)
        except IOError as e:
            logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, histFile, e.strerror))

    for oldName, newName in filenameList:
        newPath, newFile = os.path.split(newName)
        oldPath, oldFile = os.path.split(oldName)

        logger.info("'{0}' --moving to-> '{1}'".format(oldName, newName))
        # make any directories that were not there before
        try:
            os.makedirs(newPath)
        except (IOError, OSError) as e:
            if errno.errorcode[e.errno] == 'EEXIST':
                pass
            else:
                logger.error("Error[{0}] in directory {1}: {2}".format(e.errno, newPath, e.strerror))
        if verify:
            logger.debug("Calculating CRC before sorting for '{0}'".format(oldFile))
            beforeCRC = getCRC(oldName, 'Calculating CRC before moving')
            logger.debug("'{0}': {1}".format(oldFile, beforeCRC))
        try:
            # move or copy the file
            if copy:
                sys.stdout.write('Copying File...\r')
                shutil.copy(oldName, newName)
                logger.debug("'{0}' --copied to-> '{1}'".format(oldName, newName))
            else:
                sys.stdout.write('Moving File...\r')
                shutil.move(oldName, newName)
                if storeHistory:
                    histCSVWriter.writerow([oldName, newName])
                logger.debug("'{0}' --moved to-> '{1}'".format(oldName, newName))
        except IOError as e:
            if errno.errorcode[e.errno] == 'ENOENT':
                if len(newName) >= 255:
                    logger.error('New name too long; must be less than 255 chars, was {0} chars: {1}'.format(len(newName), newName))
                    continue
            else:
                logger.error("IOError[{0}] in file {1}: {2}".format(e.errno, oldName, e.strerror))
                continue
        if verify:
            logger.debug("Calculating CRC after move for '{0}'".format(newFile))
            afterCRC = getCRC(newName, 'Calculating CRC after move')
            logger.debug("'{0}': {1}".format(newFile, afterCRC))
            if beforeCRC == afterCRC:
                logger.info('{0} has been sorted successfully to {1}'.format(oldFile, newFile))
                logger.debug('{0} is equal to {1}'.format(beforeCRC, afterCRC))
            else:
                logger.warning('{0} has not been sorted successfully to {1}'.format(oldFile, newFile))
                logger.warning('{0} is not equal to {1}'.format(beforeCRC, afterCRC))
    if storeHistory:
        histWriter.close()
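
A minimal usage sketch (not part of the packaged sources) of how these helpers might be driven together; the import path assumes the installed pyanisort package, and the directory names are invented for illustration:

    # hypothetical driver script -- paths and layout below are assumptions
    from pyanisort import utilities

    # gather every file beneath an (invented) download directory
    files = utilities.listAllfiles('/tmp/unsorted')

    # build (old, new) pairs, stripping characters that are invalid in file names
    pairs = [(f, utilities.validateFilename(f.replace('/tmp/unsorted', '/tmp/sorted', 1)))
             for f in files]

    # move each file, verify the transfer with a CRC32 check, and append the
    # (old, new) pair to history.csv so the move can be reverted with undoRename()
    utilities.renameFiles(pairs, verify=True, storeHistory=True)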



+ 20
- 0
pyanisort-git/pkg/pyanisort-git/usr/share/licenses/pyanisort-git/LICENSE

@@ -0,0 +1,20 @@
The MIT License (MIT)

Copyright (c) 2014 jotaro0010

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

BIN
pyanisort-git/pyanisort-git-r42.057e52b-1-any.pkg.tar.xz


BIN
pyanisort-git/pyanisort-git-r42.057e52b-1.src.tar.gz


+ 1
- 0
pyanisort-git/pyanisort/HEAD

@@ -0,0 +1 @@
ref: refs/heads/master

+ 8
- 0
pyanisort-git/pyanisort/config

@@ -0,0 +1,8 @@
[core]
repositoryformatversion = 0
filemode = true
bare = true
[remote "origin"]
url = https://github.com/jotaro0010/pyanisort.git
fetch = +refs/*:refs/*
mirror = true

+ 1
- 0
pyanisort-git/pyanisort/description

@@ -0,0 +1 @@
Unnamed repository; edit this file 'description' to name the repository.

+ 15
- 0
pyanisort-git/pyanisort/hooks/applypatch-msg.sample

@@ -0,0 +1,15 @@
#!/bin/sh
#
# An example hook script to check the commit log message taken by
# applypatch from an e-mail message.
#
# The hook should exit with non-zero status after issuing an
# appropriate message if it wants to stop the commit. The hook is
# allowed to edit the commit message file.
#
# To enable this hook, rename this file to "applypatch-msg".

. git-sh-setup
test -x "$GIT_DIR/hooks/commit-msg" &&
exec "$GIT_DIR/hooks/commit-msg" ${1+"$@"}
:

+ 24
- 0
pyanisort-git/pyanisort/hooks/commit-msg.sample

@@ -0,0 +1,24 @@
#!/bin/sh
#
# An example hook script to check the commit log message.
# Called by "git commit" with one argument, the name of the file
# that has the commit message. The hook should exit with non-zero
# status after issuing an appropriate message if it wants to stop the
# commit. The hook is allowed to edit the commit message file.
#
# To enable this hook, rename this file to "commit-msg".

# Uncomment the below to add a Signed-off-by line to the message.
# Doing this in a hook is a bad idea in general, but the prepare-commit-msg
# hook is more suited to it.
#
# SOB=$(git var GIT_AUTHOR_IDENT | sed -n 's/^\(.*>\).*$/Signed-off-by: \1/p')
# grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1"

# This example catches duplicate Signed-off-by lines.

test "" = "$(grep '^Signed-off-by: ' "$1" |
sort | uniq -c | sed -e '/^[ ]*1[ ]/d')" || {
echo >&2 Duplicate Signed-off-by lines.
exit 1
}

+ 8
- 0
pyanisort-git/pyanisort/hooks/post-update.sample

@@ -0,0 +1,8 @@
#!/bin/sh
#
# An example hook script to prepare a packed repository for use over
# dumb transports.
#
# To enable this hook, rename this file to "post-update".

exec git update-server-info

+ 14
- 0
pyanisort-git/pyanisort/hooks/pre-applypatch.sample

@@ -0,0 +1,14 @@
#!/bin/sh
#
# An example hook script to verify what is about to be committed
# by applypatch from an e-mail message.
#
# The hook should exit with non-zero status after issuing an
# appropriate message if it wants to stop the commit.
#
# To enable this hook, rename this file to "pre-applypatch".

. git-sh-setup
test -x "$GIT_DIR/hooks/pre-commit" &&
exec "$GIT_DIR/hooks/pre-commit" ${1+"$@"}
:

+ 49
- 0
pyanisort-git/pyanisort/hooks/pre-commit.sample

@@ -0,0 +1,49 @@
#!/bin/sh
#
# An example hook script to verify what is about to be committed.
# Called by "git commit" with no arguments. The hook should
# exit with non-zero status after issuing an appropriate message if
# it wants to stop the commit.
#
# To enable this hook, rename this file to "pre-commit".

if git rev-parse --verify HEAD >/dev/null 2>&1
then
against=HEAD
else
# Initial commit: diff against an empty tree object
against=4b825dc642cb6eb9a060e54bf8d69288fbee4904
fi

# If you want to allow non-ASCII filenames set this variable to true.
allownonascii=$(git config --bool hooks.allownonascii)

# Redirect output to stderr.
exec 1>&2

# Cross platform projects tend to avoid non-ASCII filenames; prevent
# them from being added to the repository. We exploit the fact that the
# printable range starts at the space character and ends with tilde.
if [ "$allownonascii" != "true" ] &&
# Note that the use of brackets around a tr range is ok here, (it's
# even required, for portability to Solaris 10's /usr/bin/tr), since
# the square bracket bytes happen to fall in the designated range.
test $(git diff --cached --name-only --diff-filter=A -z $against |
LC_ALL=C tr -d '[ -~]\0' | wc -c) != 0
then
cat <<\EOF
Error: Attempt to add a non-ASCII file name.

This can cause problems if you want to work with people on other platforms.

To be portable it is advisable to rename the file.

If you know what you are doing you can disable this check using:

git config hooks.allownonascii true
EOF
exit 1
fi

# If there are whitespace errors, print the offending file names and fail.
exec git diff-index --check --cached $against --

+ 53
- 0
pyanisort-git/pyanisort/hooks/pre-push.sample

@@ -0,0 +1,53 @@
#!/bin/sh

# An example hook script to verify what is about to be pushed. Called by "git
# push" after it has checked the remote status, but before anything has been
# pushed. If this script exits with a non-zero status nothing will be pushed.
#
# This hook is called with the following parameters:
#
# $1 -- Name of the remote to which the push is being done
# $2 -- URL to which the push is being done
#
# If pushing without using a named remote those arguments will be equal.
#
# Information about the commits which are being pushed is supplied as lines to
# the standard input in the form:
#
# <local ref> <local sha1> <remote ref> <remote sha1>
#
# This sample shows how to prevent push of commits where the log message starts
# with "WIP" (work in progress).

remote="$1"
url="$2"

z40=0000000000000000000000000000000000000000

while read local_ref local_sha remote_ref remote_sha
do
    if [ "$local_sha" = $z40 ]
    then
        # Handle delete
        :
    else
        if [ "$remote_sha" = $z40 ]
        then
            # New branch, examine all commits
            range="$local_sha"
        else
            # Update to existing branch, examine new commits
            range="$remote_sha..$local_sha"
        fi

        # Check for WIP commit
        commit=`git rev-list -n 1 --grep '^WIP' "$range"`
        if [ -n "$commit" ]
        then
            echo >&2 "Found WIP commit in $local_ref, not pushing"
            exit 1
        fi
    fi
done

exit 0

+ 169
- 0
pyanisort-git/pyanisort/hooks/pre-rebase.sample

@@ -0,0 +1,169 @@
#!/bin/sh
#
# Copyright (c) 2006, 2008 Junio C Hamano
#
# The "pre-rebase" hook is run just before "git rebase" starts doing
# its job, and can prevent the command from running by exiting with
# non-zero status.
#
# The hook is called with the following parameters:
#
# $1 -- the upstream the series was forked from.
# $2 -- the branch being rebased (or empty when rebasing the current branch).
#
# This sample shows how to prevent topic branches that are already
# merged to 'next' branch from getting rebased, because allowing it
# would result in rebasing already published history.

publish=next
basebranch="$1"
if test "$#" = 2
then
topic="refs/heads/$2"
else
topic=`git symbolic-ref HEAD` ||
exit 0 ;# we do not interrupt rebasing detached HEAD
fi

case "$topic" in
refs/heads/??/*)
;;
*)
exit 0 ;# we do not interrupt others.
;;
esac

# Now we are dealing with a topic branch being rebased
# on top of master. Is it OK to rebase it?

# Does the topic really exist?
git show-ref -q "$topic" || {
echo >&2 "No such branch $topic"
exit 1
}

# Is topic fully merged to master?
not_in_master=`git rev-list --pretty=oneline ^master "$topic"`
if test -z "$not_in_master"
then
echo >&2 "$topic is fully merged to master; better remove it."
exit 1 ;# we could allow it, but there is no point.
fi

# Is topic ever merged to next? If so you should not be rebasing it.
only_next_1=`git rev-list ^master "^$topic" ${publish} | sort`
only_next_2=`git rev-list ^master ${publish} | sort`
if test "$only_next_1" = "$only_next_2"
then
not_in_topic=`git rev-list "^$topic" master`
if test -z "$not_in_topic"
then
echo >&2 "$topic is already up-to-date with master"
exit 1 ;# we could allow it, but there is no point.
else
exit 0
fi
else
    not_in_next=`git rev-list --pretty=oneline ^${publish} "$topic"`
    /usr/bin/perl -e '
        my $topic = $ARGV[0];
        my $msg = "* $topic has commits already merged to public branch:\n";
        my (%not_in_next) = map {
            /^([0-9a-f]+) /;
            ($1 => 1);
        } split(/\n/, $ARGV[1]);
        for my $elem (map {
                /^([0-9a-f]+) (.*)$/;
                [$1 => $2];
            } split(/\n/, $ARGV[2])) {
            if (!exists $not_in_next{$elem->[0]}) {
                if ($msg) {
                    print STDERR $msg;
                    undef $msg;
                }
                print STDERR " $elem->[1]\n";
            }
        }
    ' "$topic" "$not_in_next" "$not_in_master"
    exit 1
fi

exit 0

################################################################

This sample hook safeguards topic branches that have been
published from being rewound.

The workflow assumed here is:

* Once a topic branch forks from "master", "master" is never
merged into it again (either directly or indirectly).

* Once a topic branch is fully cooked and merged into "master",
it is deleted. If you need to build on top of it to correct
earlier mistakes, a new topic branch is created by forking at
the tip of the "master". This is not strictly necessary, but
it makes it easier to keep your history simple.

* Whenever you need to test or publish your changes to topic
branches, merge them into "next" branch.

The script, being an example, hardcodes the publish branch name
to be "next", but it is trivial to make it configurable via
$GIT_DIR/config mechanism.

With this workflow, you would want to know:

(1) ... if a topic branch has ever been merged to "next". Young
topic branches can have stupid mistakes you would rather
clean up before publishing, and things that have not been
merged into other branches can be easily rebased without
affecting other people. But once it is published, you would
not want to rewind it.

(2) ... if a topic branch has been fully merged to "master".
Then you can delete it. More importantly, you should not
build on top of it -- other people may already want to
change things related to the topic as patches against your
"master", so if you need further changes, it is better to
fork the topic (perhaps with the same name) afresh from the
tip of "master".

Let's look at this example:

                   o---o---o---o---o---o---o---o---o---o "next"
                  /       /           /           /
                 /   a---a---b A     /           /
                /   /               /           /
               /   /   c---c---c---c B      /
              /   /   /             \      /
             /   /   /   b---b C     \    /
            /   /   /   /             \  /
        ---o---o---o---o---o---o---o---o---o---o---o "master"


A, B and C are topic branches.

* A has one fix since it was merged up to "next".

* B has finished. It has been fully me