Merge branch 'release/0.10.0'

.gitignore (10 changes, vendored)
```diff
@@ -1,5 +1,5 @@
-# SB User Related #
 ######################
+# SB User Related #
 cache/*
 cache.db*
 config.ini*
```
```diff
@@ -11,18 +11,18 @@ server.crt
 server.key
 restore/

-# SB Test Related #
 ######################
+# SB Test Related #
 tests/Logs/*
 tests/sickbeard.*
 tests/cache.db

-# Compiled source #
 ######################
+# Compiled source #
 *.py[co]

-# IDE specific #
 ######################
+# IDE specific #
 *.bak
 *.tmp
 *.wpr
```
```diff
@@ -35,8 +35,8 @@ tests/cache.db
 Session.vim
 .ropeproject/*

-# OS generated files #
 ######################
+# OS generated files #
 .Spotlight-V100
 .Trashes
 .DS_Store
```
```diff
@@ -6,6 +6,8 @@ python:

 install:
 - pip install cheetah
+- pip install coveralls

 before_script: cd ./tests
-script: python all_tests.py
+script: coverage run --source=.. --omit=../lib/*,../tornado/* all_tests.py
+after_success: coveralls
```
CHANGES.md (114 changes)
```diff
@@ -1,3 +1,115 @@
+### 0.10.0 (2015-08-06 11:05:00 UTC)
+
+* Remove EZRSS provider
+* Update Tornado webserver to 4.2 (fdfaf3d)
+* Update change to suppress reporting of Tornado exception error 1 to updated package (ref:hacks.txt)
+* Update fix for API response header for JSON content type and the return of JSONP data to updated package (ref:hacks.txt)
+* Update Requests library 2.6.2 to 2.7.0 (8b5e457)
+* Update change to suppress HTTPS verification InsecureRequestWarning to updated package (ref:hacks.txt)
+* Change to consolidate cache database migration code
+* Change to only rebuild namecache on show update instead of on every search
+* Change to allow file moving across partition
+* Add removal of old entries from namecache on show deletion
+* Add Hallmark and specific ITV logos, remove logo of non-english Comedy Central Family
+* Fix provider TD failing to find episodes of air by date shows
+* Fix provider SCC failing to find episodes of air by date shows
+* Fix provider SCC searching propers
+* Fix provider SCC stop snatching releases for episodes already completed
+* Fix provider SCC handle null server responses
+* Change provider SCC remove 1 of 3 requests per search to save 30% time
+* Change provider SCC login process to use General Config/Advanced/Proxy host setting
+* Change provider SCD PEP8 and code convention cleanup
+* Change provider HDB code simplify and PEP8
+* Change provider IPT only decode unicode search strings
+* Change provider IPT login process to use General Config/Advanced/Proxy host setting
+* Change provider TB logo icon used on Config/Search Providers
+* Change provider TB PEP8 and code convention cleanup
+* Change provider TB login process to use General Config/Advanced/Proxy host setting
+* Remove useless webproxies from provider TPB as they fail for one reason or another
+* Change provider TPB to use mediaExtensions from common instead of hard-coded private list
+* Add new tld variants to provider TPB
+* Add test for authenticity to provider TPB to notify of 3rd party block
+* Change provider TD logo icon used on Config/Search Providers
+* Change provider TD login process to use General Config/Advanced/Proxy host setting
+* Change provider BTN code simplify and PEP8
+* Change provider BTS login process to use General Config/Advanced/Proxy host setting
+* Change provider FSH login process to use General Config/Advanced/Proxy host setting
+* Change provider RSS torrent code to use General Config/Advanced/Proxy host setting, simplify and PEP8
+* Change provider Wombles's PEP8 and code convention cleanup
+* Change provider Womble's use SSL
+* Change provider KAT remove dead url
+* Change provider KAT to use mediaExtensions from common instead of private list
+* Change provider KAT provider PEP8 and code convention cleanup
+* Change refactor and code simplification for torrent and newznab providers
+* Change refactor SCC to use torrent provider simplification and PEP8
+* Change refactor SCD to use torrent provider simplification
+* Change refactor TB to use torrent provider simplification and PEP8
+* Change refactor TBP to use torrent provider simplification and PEP8
+* Change refactor TD to use torrent provider simplification and PEP8
+* Change refactor TL to use torrent provider simplification and PEP8
+* Change refactor BTS to use torrent provider simplification and PEP8
+* Change refactor FSH to use torrent provider simplification and PEP8
+* Change refactor IPT to use torrent provider simplification and PEP8
+* Change refactor KAT to use torrent provider simplification and PEP8
+* Change refactor TOTV to use torrent provider simplification and PEP8
+* Remove HDTorrents torrent provider
+* Remove NextGen torrent provider
+* Add Rarbg torrent provider
+* Add MoreThan torrent provider
+* Add AlphaRatio torrent provider
+* Add PiSexy torrent provider
+* Add Strike torrent provider
+* Add TorrentShack torrent provider
+* Add BeyondHD torrent provider
+* Add GFTracker torrent provider
+* Add TtN torrent provider
+* Add GTI torrent provider
+* Fix getManualSearchStatus: object has no attribute 'segment'
+* Change handling of general HTTP error response codes to prevent issues
+* Add handling for CloudFlare custom HTTP response codes
+* Fix to correctly load local libraries instead of system installed libraries
+* Update PyNMA to hybrid v1.0
+* Change first run after install to set up the main db to the current schema instead of upgrading
+* Change don't create a backup from an initial zero byte main database file, PEP8 and code tidy up
+* Fix show list view when no shows exist and "Group show lists shows into" is set to anything other than "One Show List"
+* Fix fault matching air by date shows by using correct episode/season strings in find search results
+* Change add 'hevc', 'x265' and some langs to Config Search/Episode Search/Ignore result with any word
+* Change NotifyMyAndroid to its new web location
+* Update feedparser library 5.1.3 to 5.2.0 (8c62940)
+* Remove feedcache implementation and library
+* Add coverage testing and coveralls support
+* Add py2/3 regression testing for exception clauses
+* Change py2 exception clauses to py2/3 compatible clauses
+* Change py2 print statements to py2/3 compatible functions
+* Change py2 octal literals into the new py2/3 syntax
+* Change py2 iteritems to py2/3 compatible statements using six library
+* Change py2 queue, httplib, cookielib and xmlrpclib to py2/3 compatible calls using six
+* Change py2 file and reload functions to py2/3 compatible open and reload_module functions
+* Change Kodi notifier to use requests as opposed to urllib
+* Change to consolidate scene exceptions and name cache code
+* Change check_url function to use requests instead of httplib library
+* Update Six compatibility library 1.5.2 to 1.9.0 (8a545f4)
+* Update SimpleJSON library 2.0.9 to 3.7.3 (0bcdf20)
+* Update xmltodict library 0.9.0 to 0.9.2 (579a005)
+* Update dateutil library 2.2 to 2.4.2 (a6b8925)
+* Update ConfigObj library 4.6.0 to 5.1.0 (a68530a)
+* Update Beautiful Soup to 4.3.2 (r353)
+* Update jsonrpclib library r20 to (b59217c)
+* Change cachecontrol library to ensure cache file exists before attempting delete
+* Fix saving root dirs
+* Change pushbullet from urllib2 to requests
+* Change to make pushbullet error messages clearer
+* Change pyNMA use of urllib to requests (ref:hacks.txt)
+* Change Trakt url to fix baseline uses (e.g. add from trending)
+* Fix edit on show page for shows that have anime enabled in mass edit
+* Fix issue parsing items in ToktoToshokan provider
+* Change to only show option "End upgrade on first match" on edit show page if quality custom is selected
+* Change label "Show is grouped in" in edit show page to "Show is in group" and move the section higher
+* Fix post processing of anime with version tags
+* Change accept SD titles that contain audio quality
+* Change readme.md
+
+
 ### 0.9.1 (2015-05-25 03:03:00 UTC)

 * Fix erroneous multiple downloads of torrent files which causes snatches to fail under certain conditions
```
```diff
@@ -64,7 +176,7 @@
 * Change disable the Force buttons on the Manage Searches page while a search is running
 * Change staggered periods of testing and updating of all shows "ended" status up to 460 days
 * Change "Archive" to "Upgrade to" in Edit show and other places and improve related texts for clarity
-* Fix history consolidation to only update an episode status if the history disagrees with the status.
+* Fix history consolidation to only update an episode status if the history disagrees with the status


 ### 0.8.3 (2015-04-25 08:48:00 UTC)
```
```diff
@@ -3,3 +3,5 @@ Libs with customisations...
 /tornado
 /lib/requests/packages/urllib3/connectionpool.py
 /lib/requests/packages/urllib3/util/ssl_.py
+/lib/cachecontrol/caches/file_cache.py
+/lib/pynma/pynma.py
```
SickBeard.py (52 changes)
```diff
@@ -18,6 +18,7 @@
 # along with SickGear. If not, see <http://www.gnu.org/licenses/>.

 # Check needed software dependencies to nudge users to fix their setup
+from __future__ import print_function
 from __future__ import with_statement

 import time
```
```diff
@@ -32,7 +33,7 @@ import threading
 import getopt

 if sys.version_info < (2, 6):
-    print 'Sorry, requires Python 2.6 or 2.7.'
+    print('Sorry, requires Python 2.6 or 2.7.')
     sys.exit(1)

 try:
```
```diff
@@ -41,13 +42,14 @@ try:
     if Cheetah.Version[0] != '2':
         raise ValueError
 except ValueError:
-    print 'Sorry, requires Python module Cheetah 2.1.0 or newer.'
+    print('Sorry, requires Python module Cheetah 2.1.0 or newer.')
     sys.exit(1)
 except:
-    print 'The Python module Cheetah is required'
+    print('The Python module Cheetah is required')
     sys.exit(1)

-sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), 'lib')))
+sys.path.insert(1, os.path.abspath(os.path.join(os.path.dirname(__file__), 'lib')))
+from lib.six import moves

 # We only need this for compiling an EXE and I will just always do that on 2.6+
 if sys.hexversion >= 0x020600F0:
```
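Many hunks in this file apply the same py2/3 pattern named in the changelog ("Change py2 print statements to py2/3 compatible functions"): add `from __future__ import print_function` and convert `print x` statements to calls. A minimal sketch of the technique; the `buf` target is illustrative only:

```python
from __future__ import print_function

import io

# With the future import active, Python 2 parses print as a function,
# matching Python 3 syntax; statements like  print 'msg'  become print('msg').
buf = io.StringIO()
print(u'Sorry, requires Python 2.6 or 2.7.', file=buf)
message = buf.getvalue()
```

The `file=` keyword (used here to capture output) only exists once print is a function, which is exactly what the future import provides on Python 2.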
```diff
@@ -139,15 +141,15 @@ class SickGear(object):
             sickbeard.SYS_ENCODING = 'UTF-8'

         if not hasattr(sys, 'setdefaultencoding'):
-            reload(sys)
+            moves.reload_module(sys)

         try:
             # pylint: disable=E1101
             # On non-unicode builds this will raise an AttributeError, if encoding type is not valid it throws a LookupError
             sys.setdefaultencoding(sickbeard.SYS_ENCODING)
         except:
-            print 'Sorry, you MUST add the SickGear folder to the PYTHONPATH environment variable'
-            print 'or find another way to force Python to use %s for string encoding.' % sickbeard.SYS_ENCODING
+            print('Sorry, you MUST add the SickGear folder to the PYTHONPATH environment variable')
+            print('or find another way to force Python to use %s for string encoding.' % sickbeard.SYS_ENCODING)
             sys.exit(1)

         # Need console logging for SickBeard.py and SickBeard-console.exe
```
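The `reload(sys)` → `moves.reload_module(sys)` change swaps Python 2's `reload()` builtin for the six alias that resolves correctly on both interpreters; on Python 3 it resolves to `importlib.reload`. A sketch using the stdlib equivalent (reloading `json` here purely as a harmless example module):

```python
import importlib
import json

# six.moves.reload_module dispatches to the right per-interpreter name:
# the reload() builtin on Python 2, importlib.reload on Python 3.
# Reloading re-executes the module's code and returns the same module object.
reloaded = importlib.reload(json)
```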
```diff
@@ -231,7 +233,7 @@ class SickGear(object):

         else:
             if self.consoleLogging:
-                print u'Not running in daemon mode. PID file creation disabled'
+                print(u'Not running in daemon mode. PID file creation disabled')

             self.CREATEPID = False
```
```diff
@@ -242,7 +244,7 @@ class SickGear(object):
         # Make sure that we can create the data dir
         if not os.access(sickbeard.DATA_DIR, os.F_OK):
             try:
-                os.makedirs(sickbeard.DATA_DIR, 0744)
+                os.makedirs(sickbeard.DATA_DIR, 0o744)
             except os.error:
                 sys.exit(u'Unable to create data directory: %s Exiting.' % sickbeard.DATA_DIR)
```
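The `0744` → `0o744` change is the changelog's "Change py2 octal literals into the new py2/3 syntax": a bare leading zero is a syntax error on Python 3, while the `0o` prefix is accepted by Python 2.6+ and Python 3 alike, and denotes the same value:

```python
# rwxr--r-- as an octal mode; 0o744 == 7*64 + 4*8 + 4 == 484 decimal
mode = 0o744
```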
```diff
@@ -260,11 +262,11 @@ class SickGear(object):
         os.chdir(sickbeard.DATA_DIR)

         if self.consoleLogging:
-            print u'Starting up SickGear from %s' % sickbeard.CONFIG_FILE
+            print(u'Starting up SickGear from %s' % sickbeard.CONFIG_FILE)

         # Load the config and publish it to the sickbeard package
         if not os.path.isfile(sickbeard.CONFIG_FILE):
-            print u'Unable to find "%s", all settings will be default!' % sickbeard.CONFIG_FILE
+            print(u'Unable to find "%s", all settings will be default!' % sickbeard.CONFIG_FILE)

         sickbeard.CFG = ConfigObj(sickbeard.CONFIG_FILE)

```
```diff
@@ -272,12 +274,12 @@ class SickGear(object):

         if CUR_DB_VERSION > 0:
             if CUR_DB_VERSION < MIN_DB_VERSION:
-                print u'Your database version (%s) is too old to migrate from with this version of SickGear' \
-                    % CUR_DB_VERSION
+                print(u'Your database version (%s) is too old to migrate from with this version of SickGear' \
+                    % CUR_DB_VERSION)
                 sys.exit(u'Upgrade using a previous version of SG first, or start with no database file to begin fresh')
             if CUR_DB_VERSION > MAX_DB_VERSION:
-                print u'Your database version (%s) has been incremented past what this version of SickGear supports' \
-                    % CUR_DB_VERSION
+                print(u'Your database version (%s) has been incremented past what this version of SickGear supports' \
+                    % CUR_DB_VERSION)
                 sys.exit(
                     u'If you have used other forks of SG, your database may be unusable due to their modifications')
```
```diff
@@ -361,6 +363,10 @@ class SickGear(object):
         # refresh network timezones
         network_timezones.update_network_dict()

+        # load all ids from xem
+        startup_background_tasks = threading.Thread(name='FETCH-XEMDATA', target=sickbeard.scene_exceptions.get_xem_ids)
+        startup_background_tasks.start()
+
         # sure, why not?
         if sickbeard.USE_FAILED_DOWNLOADS:
             failed_history.trimHistory()
```
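The added FETCH-XEMDATA lines start a named background thread at startup so the XEM id fetch does not block the rest of initialisation. A minimal sketch of the same pattern, with a hypothetical stand-in for the real `get_xem_ids` task (the real startup code does not `join()`; it is shown here only so the demo completes deterministically):

```python
import threading

def fetch_xem_ids():
    # hypothetical stand-in for sickbeard.scene_exceptions.get_xem_ids
    fetch_xem_ids.done = True

# Naming the thread makes it identifiable in logs and debuggers.
task = threading.Thread(name='FETCH-XEMDATA', target=fetch_xem_ids)
task.start()
task.join()
```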
```diff
@@ -387,7 +393,7 @@ class SickGear(object):
                 pid = os.fork()  # @UndefinedVariable - only available in UNIX
                 if pid != 0:
                     os._exit(0)
-            except OSError, e:
+            except OSError as e:
                 sys.stderr.write('fork #1 failed: %d (%s)\n' % (e.errno, e.strerror))
                 sys.exit(1)
```
```diff
@@ -402,7 +408,7 @@ class SickGear(object):
                 pid = os.fork()  # @UndefinedVariable - only available in UNIX
                 if pid != 0:
                     os._exit(0)
-            except OSError, e:
+            except OSError as e:
                 sys.stderr.write('fork #2 failed: %d (%s)\n' % (e.errno, e.strerror))
                 sys.exit(1)
```
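Both fork hunks apply the changelog's "Change py2 exception clauses to py2/3 compatible clauses": `except OSError, e` is Python 2-only syntax, while `except OSError as e` parses on Python 2.6+ and Python 3. A minimal sketch (the error values are illustrative):

```python
# 'as' binds the exception instance; the old comma form is rejected by Python 3.
try:
    raise OSError(1, 'fork failed')
except OSError as e:
    captured = (e.errno, e.strerror)
```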
```diff
@@ -411,8 +417,8 @@ class SickGear(object):
             pid = str(os.getpid())
             logger.log(u'Writing PID: %s to %s' % (pid, self.PIDFILE))
             try:
-                file(self.PIDFILE, 'w').write('%s\n' % pid)
-            except IOError, e:
+                open(self.PIDFILE, 'w').write('%s\n' % pid)
+            except IOError as e:
                 logger.log_error_and_exit(
                     u'Unable to write PID file: %s Error: %s [%s]' % (self.PIDFILE, e.strerror, e.errno))
```
```diff
@@ -421,9 +427,9 @@ class SickGear(object):
         sys.stderr.flush()

         devnull = getattr(os, 'devnull', '/dev/null')
-        stdin = file(devnull, 'r')
-        stdout = file(devnull, 'a+')
-        stderr = file(devnull, 'a+')
+        stdin = open(devnull, 'r')
+        stdout = open(devnull, 'a+')
+        stderr = open(devnull, 'a+')
         os.dup2(stdin.fileno(), sys.stdin.fileno())
         os.dup2(stdout.fileno(), sys.stdout.fileno())
         os.dup2(stderr.fileno(), sys.stderr.fileno())
```
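The `file()` → `open()` swap above is the changelog's "Change py2 file and reload functions to py2/3 compatible open": the `file` builtin was removed in Python 3, while `open()` exists in both. A minimal sketch using the same `os.devnull` target as the daemonising code:

```python
import os

# open() replaces the removed file() builtin; reading from os.devnull
# always yields an empty string on any platform.
with open(os.devnull, 'r') as stdin:
    data = stdin.read()
```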
```diff
@@ -456,7 +462,7 @@ class SickGear(object):
                 curShow = TVShow(int(sqlShow['indexer']), int(sqlShow['indexer_id']))
                 curShow.nextEpisode()
                 sickbeard.showList.append(curShow)
-            except Exception, e:
+            except Exception as e:
                 logger.log(
                     u'There was an error creating the show in %s: %s' % (sqlShow['location'], str(e).decode('utf-8',
                                                                                                             'replace')),
```
```diff
@@ -1,7 +1,7 @@
-[SickBeard]
-host=localhost
-port=8081
-username=
-password=
-web_root=
+[SickBeard]
+host=localhost
+port=8081
+username=
+password=
+web_root=
+ssl=0
```
```diff
@@ -18,14 +18,15 @@
 # You should have received a copy of the GNU General Public License
 # along with SickGear. If not, see <http://www.gnu.org/licenses/>.

+from __future__ import print_function
 from __future__ import with_statement

 import os.path
 import sys

 sickbeardPath = os.path.split(os.path.split(sys.argv[0])[0])[0]
-sys.path.append(os.path.join(sickbeardPath, 'lib'))
-sys.path.append(sickbeardPath)
+sys.path.insert(1, os.path.join(sickbeardPath, 'lib'))
+sys.path.insert(1, sickbeardPath)

 try:
     import requests
```
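The switch from `sys.path.append` to `sys.path.insert(1, ...)` changes import priority: the bundled `lib` directory is searched before system site-packages (which sit later in `sys.path`) instead of after them, while index 0 — the script's own directory — is left alone. This matches the changelog's "Fix to correctly load local libraries instead of system installed libraries". A toy illustration with a stand-in list (the entry names are hypothetical):

```python
# simplified stand-in for sys.path: script dir first, site-packages after
path = ['script_dir', 'site-packages']

# insert(1, ...) puts the bundled libs ahead of site-packages;
# append() would have let an installed copy shadow the bundled one.
path.insert(1, 'sickgear_lib')
```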
```diff
@@ -132,7 +133,7 @@ def processEpisode(dir_to_process, org_NZB_name=None, status=None):
     sess.post(login_url, data={'username': username, 'password': password}, stream=True, verify=False)
     result = sess.get(url, params=params, stream=True, verify=False)
     if result.status_code == 401:
-        print 'Verify and use correct username and password in autoProcessTV.cfg'
+        print('Verify and use correct username and password in autoProcessTV.cfg')
     else:
         for line in result.iter_lines():
             if line:
```
```diff
@@ -19,12 +19,13 @@
 # along with SickGear. If not, see <http://www.gnu.org/licenses/>.


+from __future__ import print_function
 import sys

 import autoProcessTV

 if len(sys.argv) < 4:
-    print 'No folder supplied - is this being called from HellaVCR?'
+    print('No folder supplied - is this being called from HellaVCR?')
     sys.exit()
 else:
     autoProcessTV.processEpisode(sys.argv[3], sys.argv[2])
```
```diff
@@ -1,4 +1,5 @@
 #!/usr/bin/env python2
+from __future__ import print_function
 import sys
 import os
 import time
```
```diff
@@ -6,8 +7,8 @@ import ConfigParser
 import logging

 sickbeardPath = os.path.split(os.path.split(sys.argv[0])[0])[0]
-sys.path.append(os.path.join(sickbeardPath, 'lib'))
-sys.path.append(sickbeardPath)
+sys.path.insert(1, os.path.join(sickbeardPath, 'lib'))
+sys.path.insert(1, sickbeardPath)
 configFilename = os.path.join(sickbeardPath, 'config.ini')

 try:
```
```diff
@@ -22,9 +23,11 @@ try:
     fp = open(configFilename, 'r')
     config.readfp(fp)
     fp.close()
-except IOError, e:
-    print 'Could not find/read Sickbeard config.ini: ' + str(e)
-    print 'Possibly wrong mediaToSickbeard.py location. Ensure the file is in the autoProcessTV subdir of your Sickbeard installation'
+except IOError as e:
+    print('Could not find/read Sickbeard config.ini: ' + str(e))
+    print(
+        'Possibly wrong mediaToSickbeard.py location. Ensure the file is in the autoProcessTV subdir of your Sickbeard '
+        'installation')
     time.sleep(3)
     sys.exit(1)
```
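The hunk above reads `config.ini` with Python 2's `ConfigParser`; on Python 3 the module is spelled `configparser`, and `read_string` can replace the file-handle-based `readfp()` used here. A small sketch with hypothetical `[SickBeard]` values matching the cfg sample earlier in this diff:

```python
import configparser  # Python 2 spells this module ConfigParser

config = configparser.RawConfigParser()
# read_string parses config text directly, replacing readfp(open(...)).
config.read_string(u'[SickBeard]\nhost=localhost\nport=8081\n')
host = config.get('SickBeard', 'host')
port = config.getint('SickBeard', 'port')
```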
```diff
@@ -41,7 +44,7 @@ logfile = os.path.join(logdir, 'sickbeard.log')
 try:
     handler = logging.FileHandler(logfile)
 except:
-    print 'Unable to open/create the log file at ' + logfile
+    print('Unable to open/create the log file at ' + logfile)
     time.sleep(3)
     sys.exit()
```
```diff
@@ -53,7 +56,7 @@ def utorrent():
     # print 'Calling utorrent'
     if len(sys.argv) < 2:
         scriptlogger.error('No folder supplied - is this being called from uTorrent?')
-        print 'No folder supplied - is this being called from uTorrent?'
+        print('No folder supplied - is this being called from uTorrent?')
         time.sleep(3)
         sys.exit()
```
```diff
@@ -73,7 +76,7 @@ def deluge():

     if len(sys.argv) < 4:
         scriptlogger.error('No folder supplied - is this being called from Deluge?')
-        print 'No folder supplied - is this being called from Deluge?'
+        print('No folder supplied - is this being called from Deluge?')
         time.sleep(3)
         sys.exit()
```
```diff
@@ -86,7 +89,7 @@ def blackhole():

     if None != os.getenv('TR_TORRENT_DIR'):
         scriptlogger.debug('Processing script triggered by Transmission')
-        print 'Processing script triggered by Transmission'
+        print('Processing script triggered by Transmission')
         scriptlogger.debug(u'TR_TORRENT_DIR: ' + os.getenv('TR_TORRENT_DIR'))
         scriptlogger.debug(u'TR_TORRENT_NAME: ' + os.getenv('TR_TORRENT_NAME'))
         dirName = os.getenv('TR_TORRENT_DIR')
```
```diff
@@ -94,7 +97,7 @@ def blackhole():
     else:
         if len(sys.argv) < 2:
             scriptlogger.error('No folder supplied - Your client should invoke the script with a Dir and a Relese Name')
-            print 'No folder supplied - Your client should invoke the script with a Dir and a Release Name'
+            print('No folder supplied - Your client should invoke the script with a Dir and a Release Name')
             time.sleep(3)
             sys.exit()
```
```diff
@@ -127,13 +130,13 @@ def main():

     if not use_torrents:
         scriptlogger.error(u'Enable Use Torrent on Sickbeard to use this Script. Aborting!')
-        print u'Enable Use Torrent on Sickbeard to use this Script. Aborting!'
+        print(u'Enable Use Torrent on Sickbeard to use this Script. Aborting!')
         time.sleep(3)
         sys.exit()

     if not torrent_method in ['utorrent', 'transmission', 'deluge', 'blackhole']:
         scriptlogger.error(u'Unknown Torrent Method. Aborting!')
-        print u'Unknown Torrent Method. Aborting!'
+        print(u'Unknown Torrent Method. Aborting!')
         time.sleep(3)
         sys.exit()
```
```diff
@@ -141,13 +144,13 @@ def main():

     if dirName is None:
         scriptlogger.error(u'MediaToSickbeard script need a dir to be run. Aborting!')
-        print u'MediaToSickbeard script need a dir to be run. Aborting!'
+        print(u'MediaToSickbeard script need a dir to be run. Aborting!')
         time.sleep(3)
         sys.exit()

     if not os.path.isdir(dirName):
         scriptlogger.error(u'Folder ' + dirName + ' does not exist. Aborting AutoPostProcess.')
-        print u'Folder ' + dirName + ' does not exist. Aborting AutoPostProcess.'
+        print(u'Folder ' + dirName + ' does not exist. Aborting AutoPostProcess.')
         time.sleep(3)
         sys.exit()
```
```diff
@@ -174,26 +177,26 @@ def main():
     login_url = protocol + host + ':' + port + web_root + '/login'

     scriptlogger.debug('Opening URL: ' + url + ' with params=' + str(params))
-    print 'Opening URL: ' + url + ' with params=' + str(params)
+    print('Opening URL: ' + url + ' with params=' + str(params))

     try:
         sess = requests.Session()
         sess.post(login_url, data={'username': username, 'password': password}, stream=True, verify=False)
         response = sess.get(url, auth=(username, password), params=params, verify=False, allow_redirects=False)
-    except Exception, e:
+    except Exception as e:
         scriptlogger.error(u': Unknown exception raised when opening url: ' + str(e))
         time.sleep(3)
         sys.exit()

     if response.status_code == 401:
         scriptlogger.error(u'Verify and use correct username and password in autoProcessTV.cfg')
-        print 'Verify and use correct username and password in autoProcessTV.cfg'
+        print('Verify and use correct username and password in autoProcessTV.cfg')
         time.sleep(3)
         sys.exit()

     if response.status_code == 200:
         scriptlogger.info(u'Script ' + __file__ + ' Succesfull')
-        print 'Script ' + __file__ + ' Succesfull'
+        print('Script ' + __file__ + ' Succesfull')
         time.sleep(3)
         sys.exit()
```
```diff
@@ -19,11 +19,12 @@
 # along with SickGear. If not, see <http://www.gnu.org/licenses/>.


+from __future__ import print_function
 import sys
 import autoProcessTV

 if len(sys.argv) < 2:
-    print 'No folder supplied - is this being called from SABnzbd?'
+    print('No folder supplied - is this being called from SABnzbd?')
     sys.exit()
 elif len(sys.argv) >= 8:
     autoProcessTV.processEpisode(sys.argv[1], sys.argv[2], sys.argv[7])
```
```diff
@@ -1,13 +1,17 @@
 from distutils.core import setup
-import py2exe, sys, shutil
+import sys
+import shutil
+try:
+    import py2exe
+except:
+    pass

 sys.argv.append('py2exe')

-setup(
-    options = {'py2exe': {'bundle_files': 1}},
-    # windows = [{'console': "sabToSickbeard.py"}],
-    zipfile = None,
-    console = ['sabToSickbeard.py'],
-)
+setup(options={'py2exe': {'bundle_files': 1}},
+      # windows = [{'console': "sabToSickbeard.py"}],
+      zipfile=None,
+      console=['sabToSickbeard.py']
+      )

 shutil.copy('dist/sabToSickbeard.exe', '.')
```
contributing.md (125 deletions, file removed)
@ -1,125 +0,0 @@
|
|||
### Questions about SickGear?
|
||||
|
||||
To get your questions answered, please ask in the [SickGear Forum], on IRC \#SickGear pn freenode.net, or webchat.
|
||||
|
||||
# Contributing to SickGear
|
||||
|
||||
1. [Getting Involved](#getting-involved)
|
||||
2. [How To Report Bugs](#how-to-report-bugs)
|
||||
3. [Tips For Submitting Code](#tips-for-submitting-code)
|
||||
|
||||
|
||||
## Getting Involved
|
||||
|
||||
There are a number of ways to get involved with the development of SickGear. Even if you've never contributed code to an Open Source project before, we're always looking for help identifying bugs, cleaning up code, writing documentation and testing.
|
||||
|
||||
The goal of this guide is to provide the best way to contribute to the official SickGear repository. Please read through the full guide detailing [How to Report Bugs](#how-to-report-bugs).
|
||||
|
||||
## Discussion
|
||||
|
||||
### Issues and IRC
|
||||
|
||||
If you think you've found a bug please [file it in the bug tracker](#how-to-report-bugs).
|
||||
|
||||
Additionally most of the SickGear development team can be found in the [#SickGear](http://webchat.freenode.net/?channels=SickGear) IRC channel on irc.freenode.net.
|
||||
|
||||
|
||||
## How to Report Bugs
|
||||
|
||||
### Make sure it is a SickGear bug
|
||||
|
||||
Many bugs reported are actually issues with the user mis-understanding of how something works (there are a bit of moving parts to an ideal setup) and most of the time can be fixed by just changing some settings to fit the users needs.
|
||||
|
||||
If you are new to SickGear, it is usually a much better idea to ask for help first in the [SickGear IRC channel](http://webchat.freenode.net/?channels=SickGear). You will get much quicker support, and you will help avoid tying up the SickGear team with invalid bug reports.
|
||||
|
||||
### Try the latest version of SickGear
|
||||
|
||||
Bugs in old versions of SickGear may have already been fixed. In order to avoid reporting known issues, make sure you are always testing against the latest build/source. Also, we put new code in the `dev` branch first before pushing down to the `master` branch (which is what the binary builds are built off of).
|
||||
|
||||
|
||||
## Tips For Submitting Code
|
||||
|
||||
|
||||
### Code
|
||||
|
||||
**NEVER write your patches to the master branch** - it gets messy (I say this from experience!)
|
||||
|
||||
**ALWAYS USE A "TOPIC" BRANCH!** Personally I like the `branch-feature_name` format that way its easy to identify the branch and feature at a glance. Also please make note of any forum post / issue number in the pull commit so we know what you are solving (it helps with cleaning up the related items later).
|
||||
|
||||
|
||||
Please follow these guidelines before reporting a bug:
|
||||
|
||||
1. **Update to the latest version** — Check if you can reproduce the issue with the latest version from the `dev` branch.
|
||||
|
||||
2. **Use the SickGear Forums search** — check if the issue has already been reported. If it has been, please comment on the existing issue.
|
||||
|
||||
3. **Provide a means to reproduce the problem** — Please provide as much details as possible, e.g. SickGear log files (obfuscate apikey/passwords), browser and operating system versions, how you started SickGear, and of course the steps to reproduce the problem. Bugs are always reported in the forums.
|
||||
|
||||
|
||||
### Feature requests
|
||||
|
||||
Please follow the bug guidelines above for feature requests, i.e. update to the latest version and search for existing issues before posting a new request. You can submit Feature Requests in the [SickGear Forum] as well.
|
||||
|
||||
### Pull requests
|
||||
|
||||
[Pull requests](https://help.github.com/articles/using-pull-requests) are welcome and the preferred way of accepting code contributions.
|
||||
|
||||
Please follow these guidelines before sending a pull request:
|
||||
|
||||
1. Update your fork to the latest upstream version.
|
||||
|
||||
2. Use the `dev` branch to base your code off of. Create a topic-branch for your work. We will not merge your 'dev' branch, or your 'master' branch, only topic branches, coming from dev are merged.
|
||||
|
||||
3. Follow the coding conventions of the original repository. Do not change line endings of the existing file, as this will rewrite the file and loses history.
|
||||
|
||||
4. Keep your commits as autonomous as possible, i.e. create a new commit for every single bug fix or feature added.
|
||||
|
||||
5. Always add meaningful commit messages. We should not have to guess at what your code is supposed to do.
|
||||
|
||||
6. One pull request per feature. If you want multiple features, send multiple PR's
|
||||
|
||||
Please follow this process; it's the best way to get your work included in the project:
|
||||
|
||||
- [Fork](http://help.github.com/fork-a-repo/) the project, clone your fork,
|
||||
and configure the remotes:
|
||||
|
||||
```bash
|
||||
# clone your fork of the repo into the current directory in terminal
|
||||
git clone git@github.com:<your username>/SickGear.git
|
||||
# navigate to the newly cloned directory
|
||||
cd SickGear
|
||||
# assign the original repo to a remote called "upstream"
|
||||
git remote add upstream https://github.com/SickGear/SickGear.git
|
||||
```
|
||||
|
||||
- If you cloned a while ago, get the latest changes from upstream:
|
||||
|
||||
```bash
|
||||
# fetch upstream changes
|
||||
git fetch upstream
|
||||
# make sure you are on your 'master' branch
|
||||
git checkout master
|
||||
# merge upstream changes
|
||||
git merge upstream/master
|
||||
```
|
||||
|
- Create a new topic branch to contain your feature, change, or fix:

   ```bash
   git checkout -b <topic-branch-name> dev
   ```

- Commit your changes in logical chunks, or your pull request is unlikely to be merged into the main project. Use git's [interactive rebase](https://help.github.com/articles/interactive-rebase) feature to tidy up your commits before making them public.
- Push your topic branch up to your fork:

   ```bash
   git push origin <topic-branch-name>
   ```

- [Open a Pull Request](https://help.github.com/articles/using-pull-requests) with a clear title and description.
@@ -507,6 +507,7 @@ inc_bottom.tmpl
 width:100%;
 padding:20px 0;
 text-align:center;
 clear:both;
 font-size:12px
 }

@@ -1009,6 +1010,10 @@ div.formpaginate{
 font-weight:900
 }

 #addShowForm #blackwhitelist{
 padding:0 0 0 15px
 }

 #addShowForm #blackwhitelist,
 #addShowForm #blackwhitelist h4,
 #addShowForm #blackwhitelist p{
BIN gui/slick/images/network/hallmark channel.png (new file, 2.8 KiB)
BIN gui/slick/images/providers/alpharatio.png (new file, 664 B)
BIN gui/slick/images/providers/beyondhd.png (new file, 562 B)
BIN gui/slick/images/providers/gftracker.png (new file, 886 B)
BIN gui/slick/images/providers/grabtheinfo.png (new file, 1,011 B)
BIN gui/slick/images/providers/morethan.png (new file, 398 B)
BIN gui/slick/images/providers/pisexy.png (new file, 517 B)
BIN gui/slick/images/providers/rarbg.png (new file, 635 B)
BIN gui/slick/images/providers/strike.png (new file, 417 B)
BIN gui/slick/images/providers/torrent.png (new file, 916 B)
BIN gui/slick/images/providers/torrentshack.png (new file, 861 B)
BIN gui/slick/images/providers/transmithe_net.png (new file, 595 B)

Several existing network and provider images were also replaced with smaller files (e.g. 4.9 KiB reduced to 1.4 KiB).
@@ -41,7 +41,7 @@
 <label class="cleafix" for="use_xbmc">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_xbmc" id="use_xbmc" #if $sickbeard.USE_XBMC then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_xbmc" id="use_xbmc" #if $sickbeard.USE_XBMC then 'checked="checked"' else ''# />
 <p>should SickGear send XBMC commands ?</p>
 </span>
 </label>

@@ -52,7 +52,7 @@
 <label for="xbmc_always_on">
 <span class="component-title">Always on</span>
 <span class="component-desc">
-<input type="checkbox" name="xbmc_always_on" id="xbmc_always_on" #if $sickbeard.XBMC_ALWAYS_ON then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="xbmc_always_on" id="xbmc_always_on" #if $sickbeard.XBMC_ALWAYS_ON then 'checked="checked"' else ''# />
 <p>log errors when unreachable ?</p>
 </span>
 </label>

@@ -61,7 +61,7 @@
 <label for="xbmc_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="xbmc_notify_onsnatch" id="xbmc_notify_onsnatch" #if $sickbeard.XBMC_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="xbmc_notify_onsnatch" id="xbmc_notify_onsnatch" #if $sickbeard.XBMC_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>

@@ -70,7 +70,7 @@
 <label for="xbmc_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="xbmc_notify_ondownload" id="xbmc_notify_ondownload" #if $sickbeard.XBMC_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="xbmc_notify_ondownload" id="xbmc_notify_ondownload" #if $sickbeard.XBMC_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>

@@ -79,7 +79,7 @@
 <label for="xbmc_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="xbmc_notify_onsubtitledownload" id="xbmc_notify_onsubtitledownload" #if $sickbeard.XBMC_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="xbmc_notify_onsubtitledownload" id="xbmc_notify_onsubtitledownload" #if $sickbeard.XBMC_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>

@@ -88,7 +88,7 @@
 <label for="xbmc_update_library">
 <span class="component-title">Update library</span>
 <span class="component-desc">
-<input type="checkbox" name="xbmc_update_library" id="xbmc_update_library" #if $sickbeard.XBMC_UPDATE_LIBRARY then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="xbmc_update_library" id="xbmc_update_library" #if $sickbeard.XBMC_UPDATE_LIBRARY then 'checked="checked"' else ''# />
 <p>update XBMC library when a download finishes ?</p>
 </span>
 </label>

@@ -97,7 +97,7 @@
 <label for="xbmc_update_full">
 <span class="component-title">Full library update</span>
 <span class="component-desc">
-<input type="checkbox" name="xbmc_update_full" id="xbmc_update_full" #if $sickbeard.XBMC_UPDATE_FULL then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="xbmc_update_full" id="xbmc_update_full" #if $sickbeard.XBMC_UPDATE_FULL then 'checked="checked"' else ''# />
 <p>perform a full library update if update per-show fails ?</p>
 </span>
 </label>

@@ -106,7 +106,7 @@
 <label for="xbmc_update_onlyfirst">
 <span class="component-title">Only update first host</span>
 <span class="component-desc">
-<input type="checkbox" name="xbmc_update_onlyfirst" id="xbmc_update_onlyfirst" #if $sickbeard.XBMC_UPDATE_ONLYFIRST then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="xbmc_update_onlyfirst" id="xbmc_update_onlyfirst" #if $sickbeard.XBMC_UPDATE_ONLYFIRST then 'checked="checked"' else ''# />
 <p>only send library updates to the first active host ?</p>
 </span>
 </label>
@@ -165,7 +165,7 @@
 <label class="cleafix" for="use_kodi">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_kodi" id="use_kodi" #if $sickbeard.USE_KODI then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_kodi" id="use_kodi" #if $sickbeard.USE_KODI then 'checked="checked"' else ''# />
 <p>should SickGear send Kodi commands ?</p>
 </span>
 </label>

@@ -175,7 +175,7 @@
 <label for="kodi_always_on">
 <span class="component-title">Always on</span>
 <span class="component-desc">
-<input type="checkbox" name="kodi_always_on" id="kodi_always_on" #if $sickbeard.KODI_ALWAYS_ON then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="kodi_always_on" id="kodi_always_on" #if $sickbeard.KODI_ALWAYS_ON then 'checked="checked"' else ''# />
 <p>log errors when unreachable ?</p>
 </span>
 </label>

@@ -184,7 +184,7 @@
 <label for="kodi_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="kodi_notify_onsnatch" id="kodi_notify_onsnatch" #if $sickbeard.KODI_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="kodi_notify_onsnatch" id="kodi_notify_onsnatch" #if $sickbeard.KODI_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>

@@ -193,7 +193,7 @@
 <label for="kodi_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="kodi_notify_ondownload" id="kodi_notify_ondownload" #if $sickbeard.KODI_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="kodi_notify_ondownload" id="kodi_notify_ondownload" #if $sickbeard.KODI_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>

@@ -202,7 +202,7 @@
 <label for="kodi_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="kodi_notify_onsubtitledownload" id="kodi_notify_onsubtitledownload" #if $sickbeard.KODI_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="kodi_notify_onsubtitledownload" id="kodi_notify_onsubtitledownload" #if $sickbeard.KODI_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>

@@ -211,7 +211,7 @@
 <label for="kodi_update_library">
 <span class="component-title">Update library</span>
 <span class="component-desc">
-<input type="checkbox" name="kodi_update_library" id="kodi_update_library" #if $sickbeard.KODI_UPDATE_LIBRARY then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="kodi_update_library" id="kodi_update_library" #if $sickbeard.KODI_UPDATE_LIBRARY then 'checked="checked"' else ''# />
 <p>update Kodi library when a download finishes ?</p>
 </span>
 </label>

@@ -220,7 +220,7 @@
 <label for="kodi_update_full">
 <span class="component-title">Full library update</span>
 <span class="component-desc">
-<input type="checkbox" name="kodi_update_full" id="kodi_update_full" #if $sickbeard.KODI_UPDATE_FULL then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="kodi_update_full" id="kodi_update_full" #if $sickbeard.KODI_UPDATE_FULL then 'checked="checked"' else ''# />
 <p>perform a full library update if update per-show fails ?</p>
 </span>
 </label>

@@ -229,7 +229,7 @@
 <label for="kodi_update_onlyfirst">
 <span class="component-title">Only update first host</span>
 <span class="component-desc">
-<input type="checkbox" name="kodi_update_onlyfirst" id="kodi_update_onlyfirst" #if $sickbeard.KODI_UPDATE_ONLYFIRST then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="kodi_update_onlyfirst" id="kodi_update_onlyfirst" #if $sickbeard.KODI_UPDATE_ONLYFIRST then 'checked="checked"' else ''# />
 <p>only send library updates to the first active host ?</p>
 </span>
 </label>
@@ -352,7 +352,7 @@
 <label for="plex_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="plex_notify_onsnatch" id="plex_notify_onsnatch" #if $sickbeard.PLEX_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="plex_notify_onsnatch" id="plex_notify_onsnatch" #if $sickbeard.PLEX_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>download start notification</p>
 </span>
 </label>

@@ -361,7 +361,7 @@
 <label for="plex_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="plex_notify_ondownload" id="plex_notify_ondownload" #if $sickbeard.PLEX_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="plex_notify_ondownload" id="plex_notify_ondownload" #if $sickbeard.PLEX_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>download finish notification</p>
 </span>
 </label>

@@ -370,7 +370,7 @@
 <label for="plex_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="plex_notify_onsubtitledownload" id="plex_notify_onsubtitledownload" #if $sickbeard.PLEX_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="plex_notify_onsubtitledownload" id="plex_notify_onsubtitledownload" #if $sickbeard.PLEX_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>subtitle downloaded notification</p>
 </span>
 </label>
@@ -411,7 +411,7 @@
 <label for="use_nmj">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_nmj" id="use_nmj" #if $sickbeard.USE_NMJ then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_nmj" id="use_nmj" #if $sickbeard.USE_NMJ then 'checked="checked"' else ''# />
 <p>should SickGear send update commands to NMJ ?</p>
 </span>
 </label>

@@ -441,7 +441,7 @@
 <div class="field-pair">
 <label for="nmj_database">
 <span class="component-title">NMJ database</span>
-<input type="text" name="nmj_database" id="nmj_database" value="$sickbeard.NMJ_DATABASE" class="form-control input-sm input250" #if $sickbeard.NMJ_DATABASE then "readonly=\"readonly\"" else ""# />
+<input type="text" name="nmj_database" id="nmj_database" value="$sickbeard.NMJ_DATABASE" class="form-control input-sm input250" #if $sickbeard.NMJ_DATABASE then "readonly=\"readonly\"" else ''# />
 </label>
 <label>
 <span class="component-title"> </span>

@@ -451,7 +451,7 @@
 <div class="field-pair">
 <label for="nmj_mount">
 <span class="component-title">NMJ mount url</span>
-<input type="text" name="nmj_mount" id="nmj_mount" value="$sickbeard.NMJ_MOUNT" class="form-control input-sm input250" #if $sickbeard.NMJ_MOUNT then "readonly=\"readonly\"" else ""# />
+<input type="text" name="nmj_mount" id="nmj_mount" value="$sickbeard.NMJ_MOUNT" class="form-control input-sm input250" #if $sickbeard.NMJ_MOUNT then "readonly=\"readonly\"" else ''# />
 </label>
 <label>
 <span class="component-title"> </span>

@@ -477,7 +477,7 @@
 <label for="use_nmjv2">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_nmjv2" id="use_nmjv2" #if $sickbeard.USE_NMJv2 then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_nmjv2" id="use_nmjv2" #if $sickbeard.USE_NMJv2 then 'checked="checked"' else ''# />
 <p>should SickGear send update commands to NMJv2 ?</p>
 </span>
 </label>
@@ -498,10 +498,10 @@
 <span class="component-title">Database location</span>
 <span class="component-desc">
 <label for="NMJV2_DBLOC_A" class="space-right">
-<input type="radio" NAME="nmjv2_dbloc" VALUE="local" id="NMJV2_DBLOC_A" #if $sickbeard.NMJv2_DBLOC=="local" then "checked=\"checked\"" else ""# />PCH Local Media
+<input type="radio" NAME="nmjv2_dbloc" VALUE="local" id="NMJV2_DBLOC_A" #if $sickbeard.NMJv2_DBLOC=='local' then 'checked="checked"' else ''# />PCH Local Media
 </label>
 <label for="NMJV2_DBLOC_B">
-<input type="radio" NAME="nmjv2_dbloc" VALUE="network" id="NMJV2_DBLOC_B" #if $sickbeard.NMJv2_DBLOC=="network" then "checked=\"checked\"" else ""#/>PCH Network Media
+<input type="radio" NAME="nmjv2_dbloc" VALUE="network" id="NMJV2_DBLOC_B" #if $sickbeard.NMJv2_DBLOC=='network' then 'checked="checked"' else ''#/>PCH Network Media
 </label>
 </span>
 </div>

@@ -538,7 +538,7 @@
 <div class="field-pair">
 <label for="nmjv2_database">
 <span class="component-title">NMJv2 database</span>
-<input type="text" name="nmjv2_database" id="nmjv2_database" value="$sickbeard.NMJv2_DATABASE" class="form-control input-sm input250" #if $sickbeard.NMJv2_DATABASE then "readonly=\"readonly\"" else ""# />
+<input type="text" name="nmjv2_database" id="nmjv2_database" value="$sickbeard.NMJv2_DATABASE" class="form-control input-sm input250" #if $sickbeard.NMJv2_DATABASE then "readonly=\"readonly\"" else ''# />
 </label>
 <label>
 <span class="component-title"> </span>
@@ -567,7 +567,7 @@
 <label for="use_synoindex">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_synoindex" id="use_synoindex" #if $sickbeard.USE_SYNOINDEX then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_synoindex" id="use_synoindex" #if $sickbeard.USE_SYNOINDEX then 'checked="checked"' else ''# />
 <p>should SickGear send Synology notifications ?</p>
 </span>
 </label>

@@ -597,7 +597,7 @@
 <label for="use_synologynotifier">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_synologynotifier" id="use_synologynotifier" #if $sickbeard.USE_SYNOLOGYNOTIFIER then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_synologynotifier" id="use_synologynotifier" #if $sickbeard.USE_SYNOLOGYNOTIFIER then 'checked="checked"' else ''# />
 <p>should SickGear send notifications to the Synology Notifier ?</p>
 </span>
 </label>

@@ -611,7 +611,7 @@
 <label for="synologynotifier_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="synologynotifier_notify_onsnatch" id="synologynotifier_notify_onsnatch" #if $sickbeard.SYNOLOGYNOTIFIER_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="synologynotifier_notify_onsnatch" id="synologynotifier_notify_onsnatch" #if $sickbeard.SYNOLOGYNOTIFIER_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>

@@ -620,7 +620,7 @@
 <label for="synologynotifier_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="synologynotifier_notify_ondownload" id="synologynotifier_notify_ondownload" #if $sickbeard.SYNOLOGYNOTIFIER_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="synologynotifier_notify_ondownload" id="synologynotifier_notify_ondownload" #if $sickbeard.SYNOLOGYNOTIFIER_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>

@@ -629,7 +629,7 @@
 <label for="synologynotifier_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="synologynotifier_notify_onsubtitledownload" id="synologynotifier_notify_onsubtitledownload" #if $sickbeard.SYNOLOGYNOTIFIER_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="synologynotifier_notify_onsubtitledownload" id="synologynotifier_notify_onsubtitledownload" #if $sickbeard.SYNOLOGYNOTIFIER_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -651,7 +651,7 @@
 <label for="use_pytivo">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_pytivo" id="use_pytivo" #if $sickbeard.USE_PYTIVO then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_pytivo" id="use_pytivo" #if $sickbeard.USE_PYTIVO then 'checked="checked"' else ''# />
 <p>should SickGear send notifications to pyTivo ?</p>
 </span>
 </label>

@@ -713,7 +713,7 @@
 <label for="use_growl">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_growl" id="use_growl" #if $sickbeard.USE_GROWL then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_growl" id="use_growl" #if $sickbeard.USE_GROWL then 'checked="checked"' else ''# />
 <p>should SickGear send Growl notifications ?</p>
 </span>
 </label>

@@ -724,7 +724,7 @@
 <label for="growl_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="growl_notify_onsnatch" id="growl_notify_onsnatch" #if $sickbeard.GROWL_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="growl_notify_onsnatch" id="growl_notify_onsnatch" #if $sickbeard.GROWL_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>

@@ -733,7 +733,7 @@
 <label for="growl_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="growl_notify_ondownload" id="growl_notify_ondownload" #if $sickbeard.GROWL_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="growl_notify_ondownload" id="growl_notify_ondownload" #if $sickbeard.GROWL_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>

@@ -742,7 +742,7 @@
 <label for="growl_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="growl_notify_onsubtitledownload" id="growl_notify_onsubtitledownload" #if $sickbeard.GROWL_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="growl_notify_onsubtitledownload" id="growl_notify_onsubtitledownload" #if $sickbeard.GROWL_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -791,7 +791,7 @@
 <label for="use_prowl">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_prowl" id="use_prowl" #if $sickbeard.USE_PROWL then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_prowl" id="use_prowl" #if $sickbeard.USE_PROWL then 'checked="checked"' else ''# />
 <p>should SickGear send Prowl notifications ?</p>
 </span>
 </label>

@@ -802,7 +802,7 @@
 <label for="prowl_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="prowl_notify_onsnatch" id="prowl_notify_onsnatch" #if $sickbeard.PROWL_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="prowl_notify_onsnatch" id="prowl_notify_onsnatch" #if $sickbeard.PROWL_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>

@@ -811,7 +811,7 @@
 <label for="prowl_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="prowl_notify_ondownload" id="prowl_notify_ondownload" #if $sickbeard.PROWL_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="prowl_notify_ondownload" id="prowl_notify_ondownload" #if $sickbeard.PROWL_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>

@@ -820,7 +820,7 @@
 <label for="prowl_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="prowl_notify_onsubtitledownload" id="prowl_notify_onsubtitledownload" #if $sickbeard.PROWL_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="prowl_notify_onsubtitledownload" id="prowl_notify_onsubtitledownload" #if $sickbeard.PROWL_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>

@@ -839,11 +839,11 @@
 <label for="prowl_priority">
 <span class="component-title">Prowl priority:</span>
 <select id="prowl_priority" name="prowl_priority" class="form-control input-sm">
-<option value="-2" #if $sickbeard.PROWL_PRIORITY == "-2" then 'selected="selected"' else ""#>Very Low</option>
-<option value="-1" #if $sickbeard.PROWL_PRIORITY == "-1" then 'selected="selected"' else ""#>Moderate</option>
-<option value="0" #if $sickbeard.PROWL_PRIORITY == "0" then 'selected="selected"' else ""#>Normal</option>
-<option value="1" #if $sickbeard.PROWL_PRIORITY == "1" then 'selected="selected"' else ""#>High</option>
-<option value="2" #if $sickbeard.PROWL_PRIORITY == "2" then 'selected="selected"' else ""#>Emergency</option>
+<option value="-2" #if $sickbeard.PROWL_PRIORITY == '-2' then 'selected="selected"' else ''#>Very Low</option>
+<option value="-1" #if $sickbeard.PROWL_PRIORITY == '-1' then 'selected="selected"' else ''#>Moderate</option>
+<option value="0" #if $sickbeard.PROWL_PRIORITY == '0' then 'selected="selected"' else ''#>Normal</option>
+<option value="1" #if $sickbeard.PROWL_PRIORITY == '1' then 'selected="selected"' else ''#>High</option>
+<option value="2" #if $sickbeard.PROWL_PRIORITY == '2' then 'selected="selected"' else ''#>Emergency</option>
 </select>
 </label>
 <label>
@@ -871,7 +871,7 @@
 <label for="use_libnotify">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_libnotify" id="use_libnotify" #if $sickbeard.USE_LIBNOTIFY then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_libnotify" id="use_libnotify" #if $sickbeard.USE_LIBNOTIFY then 'checked="checked"' else ''# />
 <p>should SickGear send Libnotify notifications ?</p>
 </span>
 </label>

@@ -882,7 +882,7 @@
 <label for="libnotify_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="libnotify_notify_onsnatch" id="libnotify_notify_onsnatch" #if $sickbeard.LIBNOTIFY_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="libnotify_notify_onsnatch" id="libnotify_notify_onsnatch" #if $sickbeard.LIBNOTIFY_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>

@@ -891,7 +891,7 @@
 <label for="libnotify_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="libnotify_notify_ondownload" id="libnotify_notify_ondownload" #if $sickbeard.LIBNOTIFY_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="libnotify_notify_ondownload" id="libnotify_notify_ondownload" #if $sickbeard.LIBNOTIFY_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>

@@ -900,7 +900,7 @@
 <label for="libnotify_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="libnotify_notify_onsubtitledownload" id="libnotify_notify_onsubtitledownload" #if $sickbeard.LIBNOTIFY_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="libnotify_notify_onsubtitledownload" id="libnotify_notify_onsubtitledownload" #if $sickbeard.LIBNOTIFY_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@ -925,7 +925,7 @@
|
|||
<label for="use_pushover">
|
||||
<span class="component-title">Enable</span>
|
||||
<span class="component-desc">
|
||||
<input type="checkbox" class="enabler" name="use_pushover" id="use_pushover" #if $sickbeard.USE_PUSHOVER then "checked=\"checked\"" else ""# />
|
||||
<input type="checkbox" class="enabler" name="use_pushover" id="use_pushover" #if $sickbeard.USE_PUSHOVER then 'checked="checked"' else ''# />
|
||||
<p>should SickGear send Pushover notifications ?</p>
|
||||
</span>
|
||||
</label>
|
||||
|
@ -936,7 +936,7 @@
|
|||
<label for="pushover_notify_onsnatch">
|
||||
<span class="component-title">Notify on snatch</span>
|
||||
<span class="component-desc">
|
||||
<input type="checkbox" name="pushover_notify_onsnatch" id="pushover_notify_onsnatch" #if $sickbeard.PUSHOVER_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
|
||||
<input type="checkbox" name="pushover_notify_onsnatch" id="pushover_notify_onsnatch" #if $sickbeard.PUSHOVER_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
|
||||
<p>send a notification when a download starts ?</p>
|
||||
</span>
|
||||
</label>
|
||||
|
@ -945,7 +945,7 @@
|
|||
<label for="pushover_notify_ondownload">
|
||||
<span class="component-title">Notify on download</span>
|
||||
<span class="component-desc">
|
||||
<input type="checkbox" name="pushover_notify_ondownload" id="pushover_notify_ondownload" #if $sickbeard.PUSHOVER_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
|
||||
<input type="checkbox" name="pushover_notify_ondownload" id="pushover_notify_ondownload" #if $sickbeard.PUSHOVER_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
|
||||
<p>send a notification when a download finishes ?</p>
|
||||
</span>
|
||||
</label>
|
||||
|
@@ -954,7 +954,7 @@
 <label for="pushover_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="pushover_notify_onsubtitledownload" id="pushover_notify_onsubtitledownload" #if $sickbeard.PUSHOVER_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="pushover_notify_onsubtitledownload" id="pushover_notify_onsubtitledownload" #if $sickbeard.PUSHOVER_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -983,10 +983,10 @@
 <label for="pushover_priority">
 <span class="component-title">Pushover priority:</span>
 <select id="pushover_priority" name="pushover_priority" class="form-control input-sm">
-<option value="-2" #if $sickbeard.PUSHOVER_PRIORITY == -2 then 'selected="selected"' else ""#>Lowest</option>
-<option value="-1" #if $sickbeard.PUSHOVER_PRIORITY == -1 then 'selected="selected"' else ""#>Low</option>
-<option value="0" #if $sickbeard.PUSHOVER_PRIORITY == 0 then 'selected="selected"' else ""#>Normal</option>
-<option value="1" #if $sickbeard.PUSHOVER_PRIORITY == 1 then 'selected="selected"' else ""#>High</option>
+<option value="-2" #if $sickbeard.PUSHOVER_PRIORITY == '-2' then 'selected="selected"' else ''#>Lowest</option>
+<option value="-1" #if $sickbeard.PUSHOVER_PRIORITY == '-1' then 'selected="selected"' else ''#>Low</option>
+<option value="0" #if $sickbeard.PUSHOVER_PRIORITY == '0' then 'selected="selected"' else ''#>Normal</option>
+<option value="1" #if $sickbeard.PUSHOVER_PRIORITY == '1' then 'selected="selected"' else ''#>High</option>
 </select>
 </label>
 <label>
@@ -1015,28 +1015,28 @@
 <label for="pushover_sound">
 <span class="component-title">Pushover sound</span>
 <select id="pushover_sound" name="pushover_sound" class="form-control input-sm">
-<option value="pushover" #if $sickbeard.PUSHOVER_SOUND == "pushover" then 'selected="selected"' else ""#>Pushover (default)</option>
-<option value="bike" #if $sickbeard.PUSHOVER_SOUND == "bike" then 'selected="selected"' else ""#>Bike</option>
-<option value="bugle" #if $sickbeard.PUSHOVER_SOUND == "bugle" then 'selected="selected"' else ""#>Bugle</option>
-<option value="cashregister" #if $sickbeard.PUSHOVER_SOUND == "cashregister" then 'selected="selected"' else ""#>Cash Register</option>
-<option value="classical" #if $sickbeard.PUSHOVER_SOUND == "classical" then 'selected="selected"' else ""#>Classical</option>
-<option value="cosmic" #if $sickbeard.PUSHOVER_SOUND == "cosmic" then 'selected="selected"' else ""#>Cosmic</option>
-<option value="falling" #if $sickbeard.PUSHOVER_SOUND == "falling" then 'selected="selected"' else ""#>Falling</option>
-<option value="gamelan" #if $sickbeard.PUSHOVER_SOUND == "gamelan" then 'selected="selected"' else ""#>Gamelan</option>
-<option value="incoming" #if $sickbeard.PUSHOVER_SOUND == "incoming" then 'selected="selected"' else ""#>Incoming</option>
-<option value="intermission" #if $sickbeard.PUSHOVER_SOUND == "intermission" then 'selected="selected"' else ""#>Intermission</option>
-<option value="magic" #if $sickbeard.PUSHOVER_SOUND == "magic" then 'selected="selected"' else ""#>Magic</option>
-<option value="mechanical" #if $sickbeard.PUSHOVER_SOUND == "mechanical" then 'selected="selected"' else ""#>Mechanical</option>
-<option value="pianobar" #if $sickbeard.PUSHOVER_SOUND == "pianobar" then 'selected="selected"' else ""#>Piano Bar</option>
-<option value="siren" #if $sickbeard.PUSHOVER_SOUND == "siren" then 'selected="selected"' else ""#>Siren</option>
-<option value="spacealarm" #if $sickbeard.PUSHOVER_SOUND == "spacealarm" then 'selected="selected"' else ""#>Space Alarm</option>
-<option value="tugboat" #if $sickbeard.PUSHOVER_SOUND == "tugboat" then 'selected="selected"' else ""#>Tug Boat</option>
-<option value="alien" #if $sickbeard.PUSHOVER_SOUND == "alien" then 'selected="selected"' else ""#>Alien Alarm (long)</option>
-<option value="climb" #if $sickbeard.PUSHOVER_SOUND == "climb" then 'selected="selected"' else ""#>Climb (long)</option>
-<option value="persistent" #if $sickbeard.PUSHOVER_SOUND == "persistent" then 'selected="selected"' else ""#>Persistent (long)</option>
-<option value="echo" #if $sickbeard.PUSHOVER_SOUND == "echo" then 'selected="selected"' else ""#>Pushover Echo (long)</option>
-<option value="updown" #if $sickbeard.PUSHOVER_SOUND == "updown" then 'selected="selected"' else ""#>Up Down (long)</option>
-<option value="none" #if $sickbeard.PUSHOVER_SOUND == "none" then 'selected="selected"' else ""#>None (silent)</option>
+<option value="pushover" #if $sickbeard.PUSHOVER_SOUND == 'pushover' then 'selected="selected"' else ''#>Pushover (default)</option>
+<option value="bike" #if $sickbeard.PUSHOVER_SOUND == 'bike' then 'selected="selected"' else ''#>Bike</option>
+<option value="bugle" #if $sickbeard.PUSHOVER_SOUND == 'bugle' then 'selected="selected"' else ''#>Bugle</option>
+<option value="cashregister" #if $sickbeard.PUSHOVER_SOUND == 'cashregister' then 'selected="selected"' else ''#>Cash Register</option>
+<option value="classical" #if $sickbeard.PUSHOVER_SOUND == 'classical' then 'selected="selected"' else ''#>Classical</option>
+<option value="cosmic" #if $sickbeard.PUSHOVER_SOUND == 'cosmic' then 'selected="selected"' else ''#>Cosmic</option>
+<option value="falling" #if $sickbeard.PUSHOVER_SOUND == 'falling' then 'selected="selected"' else ''#>Falling</option>
+<option value="gamelan" #if $sickbeard.PUSHOVER_SOUND == 'gamelan' then 'selected="selected"' else ''#>Gamelan</option>
+<option value="incoming" #if $sickbeard.PUSHOVER_SOUND == 'incoming' then 'selected="selected"' else ''#>Incoming</option>
+<option value="intermission" #if $sickbeard.PUSHOVER_SOUND == 'intermission' then 'selected="selected"' else ''#>Intermission</option>
+<option value="magic" #if $sickbeard.PUSHOVER_SOUND == 'magic' then 'selected="selected"' else ''#>Magic</option>
+<option value="mechanical" #if $sickbeard.PUSHOVER_SOUND == 'mechanical' then 'selected="selected"' else ''#>Mechanical</option>
+<option value="pianobar" #if $sickbeard.PUSHOVER_SOUND == 'pianobar' then 'selected="selected"' else ''#>Piano Bar</option>
+<option value="siren" #if $sickbeard.PUSHOVER_SOUND == 'siren' then 'selected="selected"' else ''#>Siren</option>
+<option value="spacealarm" #if $sickbeard.PUSHOVER_SOUND == 'spacealarm' then 'selected="selected"' else ''#>Space Alarm</option>
+<option value="tugboat" #if $sickbeard.PUSHOVER_SOUND == 'tugboat' then 'selected="selected"' else ''#>Tug Boat</option>
+<option value="alien" #if $sickbeard.PUSHOVER_SOUND == 'alien' then 'selected="selected"' else ''#>Alien Alarm (long)</option>
+<option value="climb" #if $sickbeard.PUSHOVER_SOUND == 'climb' then 'selected="selected"' else ''#>Climb (long)</option>
+<option value="persistent" #if $sickbeard.PUSHOVER_SOUND == 'persistent' then 'selected="selected"' else ''#>Persistent (long)</option>
+<option value="echo" #if $sickbeard.PUSHOVER_SOUND == 'echo' then 'selected="selected"' else ''#>Pushover Echo (long)</option>
+<option value="updown" #if $sickbeard.PUSHOVER_SOUND == 'updown' then 'selected="selected"' else ''#>Up Down (long)</option>
+<option value="none" #if $sickbeard.PUSHOVER_SOUND == 'none' then 'selected="selected"' else ''#>None (silent)</option>
 </select>
 </label>
 <label>
@@ -1063,7 +1063,7 @@
 <label for="use_boxcar2">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_boxcar2" id="use_boxcar2" #if $sickbeard.USE_BOXCAR2 then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_boxcar2" id="use_boxcar2" #if $sickbeard.USE_BOXCAR2 then 'checked="checked"' else ''# />
 <p>should SickGear send Boxcar2 notifications ?</p>
 </span>
 </label>
@@ -1074,7 +1074,7 @@
 <label for="boxcar2_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="boxcar2_notify_onsnatch" id="boxcar2_notify_onsnatch" #if $sickbeard.BOXCAR2_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="boxcar2_notify_onsnatch" id="boxcar2_notify_onsnatch" #if $sickbeard.BOXCAR2_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>
@@ -1083,7 +1083,7 @@
 <label for="boxcar2_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="boxcar2_notify_ondownload" id="boxcar2_notify_ondownload" #if $sickbeard.BOXCAR2_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="boxcar2_notify_ondownload" id="boxcar2_notify_ondownload" #if $sickbeard.BOXCAR2_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>
@@ -1092,7 +1092,7 @@
 <label for="boxcar2_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="boxcar2_notify_onsubtitledownload" id="boxcar2_notify_onsubtitledownload" #if $sickbeard.BOXCAR2_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="boxcar2_notify_onsubtitledownload" id="boxcar2_notify_onsubtitledownload" #if $sickbeard.BOXCAR2_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -1111,36 +1111,36 @@
 <label for="boxcar2_sound">
 <span class="component-title">Custom sound</span>
 <select id="boxcar2_sound" name="boxcar2_sound" class="form-control input-sm">
-<option value="default" #if $sickbeard.BOXCAR2_SOUND == "default" then 'selected="selected"' else ""#>Default (General)</option>
-<option value="no-sound" #if $sickbeard.BOXCAR2_SOUND == "no-sound" then 'selected="selected"' else ""#>Silent</option>
-<option value="beep-crisp" #if $sickbeard.BOXCAR2_SOUND == "beep-crisp" then 'selected="selected"' else ""#>Beep Crisp</option>
-<option value="beep-soft" #if $sickbeard.BOXCAR2_SOUND == "beep-soft" then 'selected="selected"' else ""#>Beep Soft</option>
-<option value="bell-modern" #if $sickbeard.BOXCAR2_SOUND == "bell-modern" then 'selected="selected"' else ""#>Bell Modern</option>
-<option value="bell-one-tone" #if $sickbeard.BOXCAR2_SOUND == "bell-one-tone" then 'selected="selected"' else ""#>Bell One Tone</option>
-<option value="bell-simple" #if $sickbeard.BOXCAR2_SOUND == "bell-simple" then 'selected="selected"' else ""#>Bell Simple</option>
-<option value="bell-triple" #if $sickbeard.BOXCAR2_SOUND == "bell-triple" then 'selected="selected"' else ""#>Bell Triple</option>
-<option value="bird-1" #if $sickbeard.BOXCAR2_SOUND == "bird-1" then 'selected="selected"' else ""#>Bird 1</option>
-<option value="bird-2" #if $sickbeard.BOXCAR2_SOUND == "bird-2" then 'selected="selected"' else ""#>Bird 2</option>
-<option value="boing" #if $sickbeard.BOXCAR2_SOUND == "boing" then 'selected="selected"' else ""#>Boing</option>
-<option value="cash" #if $sickbeard.BOXCAR2_SOUND == "cash" then 'selected="selected"' else ""#>Cash</option>
-<option value="clanging" #if $sickbeard.BOXCAR2_SOUND == "clanging" then 'selected="selected"' else ""#>Clanging</option>
-<option value="detonator-charge" #if $sickbeard.BOXCAR2_SOUND == "detonator-charge" then 'selected="selected"' else ""#>Detonator Charge</option>
-<option value="digital-alarm" #if $sickbeard.BOXCAR2_SOUND == "digital-alarm" then 'selected="selected"' else ""#>Digital Alarm</option>
-<option value="done" #if $sickbeard.BOXCAR2_SOUND == "done" then 'selected="selected"' else ""#>Done</option>
-<option value="echo" #if $sickbeard.BOXCAR2_SOUND == "echo" then 'selected="selected"' else ""#>Echo</option>
-<option value="flourish" #if $sickbeard.BOXCAR2_SOUND == "flourish" then 'selected="selected"' else ""#>Flourish</option>
-<option value="harp" #if $sickbeard.BOXCAR2_SOUND == "harp" then 'selected="selected"' else ""#>Harp</option>
-<option value="light" #if $sickbeard.BOXCAR2_SOUND == "light" then 'selected="selected"' else ""#>Light</option>
-<option value="magic-chime" #if $sickbeard.BOXCAR2_SOUND == "magic-chime" then 'selected="selected"' else ""#>Magic Chime</option>
-<option value="magic-coin" #if $sickbeard.BOXCAR2_SOUND == "magic-coin" then 'selected="selected"' else ""#>Magic Coin 1</option>
-<option value="notifier-1" #if $sickbeard.BOXCAR2_SOUND == "notifier-1" then 'selected="selected"' else ""#>Notifier 1</option>
-<option value="notifier-2" #if $sickbeard.BOXCAR2_SOUND == "notifier-2" then 'selected="selected"' else ""#>Notifier 2</option>
-<option value="notifier-3" #if $sickbeard.BOXCAR2_SOUND == "notifier-3" then 'selected="selected"' else ""#>Notifier 3</option>
-<option value="orchestral-long" #if $sickbeard.BOXCAR2_SOUND == "orchestral-long" then 'selected="selected"' else ""#>Orchestral Long</option>
-<option value="orchestral-short" #if $sickbeard.BOXCAR2_SOUND == "orchestral-short" then 'selected="selected"' else ""#>Orchestral Short</option>
-<option value="score" #if $sickbeard.BOXCAR2_SOUND == "score" then 'selected="selected"' else ""#>Score</option>
-<option value="success" #if $sickbeard.BOXCAR2_SOUND == "success" then 'selected="selected"' else ""#>Success</option>
-<option value="up" #if $sickbeard.BOXCAR2_SOUND == "up" then 'selected="selected"' else ""#>Up</option>
+<option value="default" #if $sickbeard.BOXCAR2_SOUND == 'default' then 'selected="selected"' else ''#>Default (General)</option>
+<option value="no-sound" #if $sickbeard.BOXCAR2_SOUND == 'no-sound' then 'selected="selected"' else ''#>Silent</option>
+<option value="beep-crisp" #if $sickbeard.BOXCAR2_SOUND == 'beep-crisp' then 'selected="selected"' else ''#>Beep Crisp</option>
+<option value="beep-soft" #if $sickbeard.BOXCAR2_SOUND == 'beep-soft' then 'selected="selected"' else ''#>Beep Soft</option>
+<option value="bell-modern" #if $sickbeard.BOXCAR2_SOUND == 'bell-modern' then 'selected="selected"' else ''#>Bell Modern</option>
+<option value="bell-one-tone" #if $sickbeard.BOXCAR2_SOUND == 'bell-one-tone' then 'selected="selected"' else ''#>Bell One Tone</option>
+<option value="bell-simple" #if $sickbeard.BOXCAR2_SOUND == 'bell-simple' then 'selected="selected"' else ''#>Bell Simple</option>
+<option value="bell-triple" #if $sickbeard.BOXCAR2_SOUND == 'bell-triple' then 'selected="selected"' else ''#>Bell Triple</option>
+<option value="bird-1" #if $sickbeard.BOXCAR2_SOUND == 'bird-1' then 'selected="selected"' else ''#>Bird 1</option>
+<option value="bird-2" #if $sickbeard.BOXCAR2_SOUND == 'bird-2' then 'selected="selected"' else ''#>Bird 2</option>
+<option value="boing" #if $sickbeard.BOXCAR2_SOUND == 'boing' then 'selected="selected"' else ''#>Boing</option>
+<option value="cash" #if $sickbeard.BOXCAR2_SOUND == 'cash' then 'selected="selected"' else ''#>Cash</option>
+<option value="clanging" #if $sickbeard.BOXCAR2_SOUND == 'clanging' then 'selected="selected"' else ''#>Clanging</option>
+<option value="detonator-charge" #if $sickbeard.BOXCAR2_SOUND == 'detonator-charge' then 'selected="selected"' else ''#>Detonator Charge</option>
+<option value="digital-alarm" #if $sickbeard.BOXCAR2_SOUND == 'digital-alarm' then 'selected="selected"' else ''#>Digital Alarm</option>
+<option value="done" #if $sickbeard.BOXCAR2_SOUND == 'done' then 'selected="selected"' else ''#>Done</option>
+<option value="echo" #if $sickbeard.BOXCAR2_SOUND == 'echo' then 'selected="selected"' else ''#>Echo</option>
+<option value="flourish" #if $sickbeard.BOXCAR2_SOUND == 'flourish' then 'selected="selected"' else ''#>Flourish</option>
+<option value="harp" #if $sickbeard.BOXCAR2_SOUND == 'harp' then 'selected="selected"' else ''#>Harp</option>
+<option value="light" #if $sickbeard.BOXCAR2_SOUND == 'light' then 'selected="selected"' else ''#>Light</option>
+<option value="magic-chime" #if $sickbeard.BOXCAR2_SOUND == 'magic-chime' then 'selected="selected"' else ''#>Magic Chime</option>
+<option value="magic-coin" #if $sickbeard.BOXCAR2_SOUND == 'magic-coin' then 'selected="selected"' else ''#>Magic Coin 1</option>
+<option value="notifier-1" #if $sickbeard.BOXCAR2_SOUND == 'notifier-1' then 'selected="selected"' else ''#>Notifier 1</option>
+<option value="notifier-2" #if $sickbeard.BOXCAR2_SOUND == 'notifier-2' then 'selected="selected"' else ''#>Notifier 2</option>
+<option value="notifier-3" #if $sickbeard.BOXCAR2_SOUND == 'notifier-3' then 'selected="selected"' else ''#>Notifier 3</option>
+<option value="orchestral-long" #if $sickbeard.BOXCAR2_SOUND == 'orchestral-long' then 'selected="selected"' else ''#>Orchestral Long</option>
+<option value="orchestral-short" #if $sickbeard.BOXCAR2_SOUND == 'orchestral-short' then 'selected="selected"' else ''#>Orchestral Short</option>
+<option value="score" #if $sickbeard.BOXCAR2_SOUND == 'score' then 'selected="selected"' else ''#>Score</option>
+<option value="success" #if $sickbeard.BOXCAR2_SOUND == 'success' then 'selected="selected"' else ''#>Success</option>
+<option value="up" #if $sickbeard.BOXCAR2_SOUND == 'up' then 'selected="selected"' else ''#>Up</option>
 </select>
 </label>
 <label>
@@ -1159,7 +1159,7 @@
 <div class="component-group">
 <div class="component-group-desc">
 <img class="notifier-icon" src="$sbRoot/images/notifiers/nma.png" alt="" title="NMA"/>
-<h3><a href="<%= anon_url('http://nma.usk.bz') %>" rel="noreferrer" onclick="window.open(this.href, '_blank'); return false;">Notify My Android</a></h3>
+<h3><a href="<%= anon_url('https://www.notifymyandroid.com') %>" rel="noreferrer" onclick="window.open(this.href, '_blank'); return false;">Notify My Android</a></h3>
 <p>Notify My Android is a Prowl-like Android App and API that offers an easy way to send notifications from your application directly to your Android device.</p>
 </div>
 <fieldset class="component-group-list">
@@ -1167,7 +1167,7 @@
 <label for="use_nma">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_nma" id="use_nma" #if $sickbeard.USE_NMA then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_nma" id="use_nma" #if $sickbeard.USE_NMA then 'checked="checked"' else ''# />
 <p>should SickGear send NMA notifications ?</p>
 </span>
 </label>
@@ -1178,7 +1178,7 @@
 <label for="nma_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="nma_notify_onsnatch" id="nma_notify_onsnatch" #if $sickbeard.NMA_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="nma_notify_onsnatch" id="nma_notify_onsnatch" #if $sickbeard.NMA_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>
@@ -1187,7 +1187,7 @@
 <label for="nma_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="nma_notify_ondownload" id="nma_notify_ondownload" #if $sickbeard.NMA_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="nma_notify_ondownload" id="nma_notify_ondownload" #if $sickbeard.NMA_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>
@@ -1196,7 +1196,7 @@
 <label for="nma_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="nma_notify_onsubtitledownload" id="nma_notify_onsubtitledownload" #if $sickbeard.NMA_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="nma_notify_onsubtitledownload" id="nma_notify_onsubtitledownload" #if $sickbeard.NMA_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -1215,11 +1215,11 @@
 <label for="nma_priority">
 <span class="component-title">NMA priority:</span>
 <select id="nma_priority" name="nma_priority" class="form-control input-sm">
-<option value="-2" #if $sickbeard.NMA_PRIORITY == "-2" then 'selected="selected"' else ""#>Very Low</option>
-<option value="-1" #if $sickbeard.NMA_PRIORITY == "-1" then 'selected="selected"' else ""#>Moderate</option>
-<option value="0" #if $sickbeard.NMA_PRIORITY == "0" then 'selected="selected"' else ""#>Normal</option>
-<option value="1" #if $sickbeard.NMA_PRIORITY == "1" then 'selected="selected"' else ""#>High</option>
-<option value="2" #if $sickbeard.NMA_PRIORITY == "2" then 'selected="selected"' else ""#>Emergency</option>
+<option value="-2" #if $sickbeard.NMA_PRIORITY == '-2' then 'selected="selected"' else ''#>Very Low</option>
+<option value="-1" #if $sickbeard.NMA_PRIORITY == '-1' then 'selected="selected"' else ''#>Moderate</option>
+<option value="0" #if $sickbeard.NMA_PRIORITY == '0' then 'selected="selected"' else ''#>Normal</option>
+<option value="1" #if $sickbeard.NMA_PRIORITY == '1' then 'selected="selected"' else ''#>High</option>
+<option value="2" #if $sickbeard.NMA_PRIORITY == '2' then 'selected="selected"' else ''#>Emergency</option>
 </select>
 </label>
 <label>
@@ -1246,7 +1246,7 @@
 <label for="use_pushalot">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_pushalot" id="use_pushalot" #if $sickbeard.USE_PUSHALOT then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_pushalot" id="use_pushalot" #if $sickbeard.USE_PUSHALOT then 'checked="checked"' else ''# />
 <p>should SickGear send Pushalot notifications ?</p>
 </span>
 </label>
@@ -1257,7 +1257,7 @@
 <label for="pushalot_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="pushalot_notify_onsnatch" id="pushalot_notify_onsnatch" #if $sickbeard.PUSHALOT_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="pushalot_notify_onsnatch" id="pushalot_notify_onsnatch" #if $sickbeard.PUSHALOT_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>
@@ -1266,7 +1266,7 @@
 <label for="pushalot_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="pushalot_notify_ondownload" id="pushalot_notify_ondownload" #if $sickbeard.PUSHALOT_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="pushalot_notify_ondownload" id="pushalot_notify_ondownload" #if $sickbeard.PUSHALOT_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>
@@ -1275,7 +1275,7 @@
 <label for="pushalot_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="pushalot_notify_onsubtitledownload" id="pushalot_notify_onsubtitledownload" #if $sickbeard.PUSHALOT_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="pushalot_notify_onsubtitledownload" id="pushalot_notify_onsubtitledownload" #if $sickbeard.PUSHALOT_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -1309,7 +1309,7 @@
 <label for="use_pushbullet">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_pushbullet" id="use_pushbullet" #if $sickbeard.USE_PUSHBULLET then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_pushbullet" id="use_pushbullet" #if $sickbeard.USE_PUSHBULLET then 'checked="checked"' else ''# />
 <p>should SickGear send Pushbullet notifications ?</p>
 </span>
 </label>
@@ -1320,7 +1320,7 @@
 <label for="pushbullet_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="pushbullet_notify_onsnatch" id="pushbullet_notify_onsnatch" #if $sickbeard.PUSHBULLET_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="pushbullet_notify_onsnatch" id="pushbullet_notify_onsnatch" #if $sickbeard.PUSHBULLET_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>
@@ -1329,7 +1329,7 @@
 <label for="pushbullet_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="pushbullet_notify_ondownload" id="pushbullet_notify_ondownload" #if $sickbeard.PUSHBULLET_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="pushbullet_notify_ondownload" id="pushbullet_notify_ondownload" #if $sickbeard.PUSHBULLET_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>
@@ -1338,7 +1338,7 @@
 <label for="pushbullet_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="pushbullet_notify_onsubtitledownload" id="pushbullet_notify_onsubtitledownload" #if $sickbeard.PUSHBULLET_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="pushbullet_notify_onsubtitledownload" id="pushbullet_notify_onsubtitledownload" #if $sickbeard.PUSHBULLET_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -1388,7 +1388,7 @@
 <label for="use_twitter">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_twitter" id="use_twitter" #if $sickbeard.USE_TWITTER then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_twitter" id="use_twitter" #if $sickbeard.USE_TWITTER then 'checked="checked"' else ''# />
 <p>should SickGear post tweets on Twitter ?</p>
 </span>
 </label>
@@ -1403,7 +1403,7 @@
 <label for="twitter_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="twitter_notify_onsnatch" id="twitter_notify_onsnatch" #if $sickbeard.TWITTER_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="twitter_notify_onsnatch" id="twitter_notify_onsnatch" #if $sickbeard.TWITTER_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>
@@ -1412,7 +1412,7 @@
 <label for="twitter_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="twitter_notify_ondownload" id="twitter_notify_ondownload" #if $sickbeard.TWITTER_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="twitter_notify_ondownload" id="twitter_notify_ondownload" #if $sickbeard.TWITTER_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>
@@ -1421,7 +1421,7 @@
 <label for="twitter_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="twitter_notify_onsubtitledownload" id="twitter_notify_onsubtitledownload" #if $sickbeard.TWITTER_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="twitter_notify_onsubtitledownload" id="twitter_notify_onsubtitledownload" #if $sickbeard.TWITTER_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -1472,7 +1472,7 @@
 <label for="use_trakt">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_trakt" id="use_trakt" #if $sickbeard.USE_TRAKT then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_trakt" id="use_trakt" #if $sickbeard.USE_TRAKT then 'checked="checked"' else ''# />
 <p>should SickGear send Trakt.tv notifications ?</p>
 </span>
 </label>
@@ -1515,7 +1515,7 @@
 <span class="component-desc">
 <select id="trakt_default_indexer" name="trakt_default_indexer" class="form-control input-sm">
 #for $indexer in $sickbeard.indexerApi().indexers
-<option value="$indexer" #if $indexer == $sickbeard.TRAKT_DEFAULT_INDEXER then "selected=\"selected\"" else ""#>$sickbeard.indexerApi().indexers[$indexer]</option>
+<option value="$indexer" #if $indexer == $sickbeard.TRAKT_DEFAULT_INDEXER then 'selected="selected"' else ''#>$sickbeard.indexerApi().indexers[$indexer]</option>
 #end for
 </select>
 </span>
@@ -1525,7 +1525,7 @@
 <label for="trakt_sync">
 <span class="component-title">Sync libraries:</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="trakt_sync" id="trakt_sync" #if $sickbeard.TRAKT_SYNC then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="trakt_sync" id="trakt_sync" #if $sickbeard.TRAKT_SYNC then 'checked="checked"' else ''# />
 <p>sync your SickGear show library with your trakt show library.</p>
 </span>
 </label>
@@ -1534,7 +1534,7 @@
 <label for="trakt_use_watchlist">
 <span class="component-title">Use watchlist:</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="trakt_use_watchlist" id="trakt_use_watchlist" #if $sickbeard.TRAKT_USE_WATCHLIST then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="trakt_use_watchlist" id="trakt_use_watchlist" #if $sickbeard.TRAKT_USE_WATCHLIST then 'checked="checked"' else ''# />
 <p>get new shows from your trakt watchlist.</p>
 </span>
 </label>
@@ -1544,9 +1544,9 @@
 <label for="trakt_method_add">
 <span class="component-title">Watchlist add method:</span>
 <select id="trakt_method_add" name="trakt_method_add" class="form-control input-sm">
-<option value="0" #if $sickbeard.TRAKT_METHOD_ADD == 0 then "selected=\"selected\"" else ""#>Skip All</option>
-<option value="1" #if $sickbeard.TRAKT_METHOD_ADD == 1 then "selected=\"selected\"" else ""#>Download Pilot Only</option>
-<option value="2" #if $sickbeard.TRAKT_METHOD_ADD == 2 then "selected=\"selected\"" else ""#>Get whole show</option>
+<option value="0" #if $sickbeard.TRAKT_METHOD_ADD == 0 then 'selected="selected"' else ''#>Skip All</option>
+<option value="1" #if $sickbeard.TRAKT_METHOD_ADD == 1 then 'selected="selected"' else ''#>Download Pilot Only</option>
+<option value="2" #if $sickbeard.TRAKT_METHOD_ADD == 2 then 'selected="selected"' else ''#>Get whole show</option>
 </select>
 </label>
 <label>
@@ -1558,7 +1558,7 @@
 <label for="trakt_remove_watchlist">
 <span class="component-title">Remove episode:</span>
 <span class="component-desc">
-<input type="checkbox" name="trakt_remove_watchlist" id="trakt_remove_watchlist" #if $sickbeard.TRAKT_REMOVE_WATCHLIST then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="trakt_remove_watchlist" id="trakt_remove_watchlist" #if $sickbeard.TRAKT_REMOVE_WATCHLIST then 'checked="checked"' else ''# />
 <p>remove an episode from your watchlist after it is downloaded.</p>
 </span>
 </label>
@@ -1567,7 +1567,7 @@
 <label for="trakt_remove_serieslist">
 <span class="component-title">Remove series:</span>
 <span class="component-desc">
-<input type="checkbox" name="trakt_remove_serieslist" id="trakt_remove_serieslist" #if $sickbeard.TRAKT_REMOVE_SERIESLIST then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="trakt_remove_serieslist" id="trakt_remove_serieslist" #if $sickbeard.TRAKT_REMOVE_SERIESLIST then 'checked="checked"' else ''# />
 <p>remove the whole series from your watchlist after any download.</p>
 </span>
 </label>
@@ -1576,7 +1576,7 @@
 <label for="trakt_start_paused">
 <span class="component-title">Start paused:</span>
 <span class="component-desc">
-<input type="checkbox" name="trakt_start_paused" id="trakt_start_paused" #if $sickbeard.TRAKT_START_PAUSED then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="trakt_start_paused" id="trakt_start_paused" #if $sickbeard.TRAKT_START_PAUSED then 'checked="checked"' else ''# />
 <p>show's grabbed from your trakt watchlist start paused.</p>
 </span>
 </label>
@@ -1600,7 +1600,7 @@
 <label for="use_email">
 <span class="component-title">Enable</span>
 <span class="component-desc">
-<input type="checkbox" class="enabler" name="use_email" id="use_email" #if $sickbeard.USE_EMAIL then "checked=\"checked\"" else ""# />
+<input type="checkbox" class="enabler" name="use_email" id="use_email" #if $sickbeard.USE_EMAIL then 'checked="checked"' else ''# />
 <p>should SickGear send email notifications ?</p>
 </span>
 </label>
@@ -1611,7 +1611,7 @@
 <label for="email_notify_onsnatch">
 <span class="component-title">Notify on snatch</span>
 <span class="component-desc">
-<input type="checkbox" name="email_notify_onsnatch" id="email_notify_onsnatch" #if $sickbeard.EMAIL_NOTIFY_ONSNATCH then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="email_notify_onsnatch" id="email_notify_onsnatch" #if $sickbeard.EMAIL_NOTIFY_ONSNATCH then 'checked="checked"' else ''# />
 <p>send a notification when a download starts ?</p>
 </span>
 </label>
@@ -1620,7 +1620,7 @@
 <label for="email_notify_ondownload">
 <span class="component-title">Notify on download</span>
 <span class="component-desc">
-<input type="checkbox" name="email_notify_ondownload" id="email_notify_ondownload" #if $sickbeard.EMAIL_NOTIFY_ONDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="email_notify_ondownload" id="email_notify_ondownload" #if $sickbeard.EMAIL_NOTIFY_ONDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when a download finishes ?</p>
 </span>
 </label>
@@ -1629,7 +1629,7 @@
 <label for="email_notify_onsubtitledownload">
 <span class="component-title">Notify on subtitle download</span>
 <span class="component-desc">
-<input type="checkbox" name="email_notify_onsubtitledownload" id="email_notify_onsubtitledownload" #if $sickbeard.EMAIL_NOTIFY_ONSUBTITLEDOWNLOAD then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="email_notify_onsubtitledownload" id="email_notify_onsubtitledownload" #if $sickbeard.EMAIL_NOTIFY_ONSUBTITLEDOWNLOAD then 'checked="checked"' else ''# />
 <p>send a notification when subtitles are downloaded ?</p>
 </span>
 </label>
@@ -1668,7 +1668,7 @@
 <label for="email_tls">
 <span class="component-title">Use TLS</span>
 <span class="component-desc">
-<input type="checkbox" name="email_tls" id="email_tls" #if $sickbeard.EMAIL_TLS then "checked=\"checked\"" else ""# />
+<input type="checkbox" name="email_tls" id="email_tls" #if $sickbeard.EMAIL_TLS then 'checked="checked"' else ''# />
 <p>check to use TLS encryption.</p>
 </span>
 </label>
@@ -3,8 +3,8 @@
 #from sickbeard.providers import thepiratebay
 #from sickbeard.helpers import anon_url, starify
 ##
-#set global $title="Config - Providers"
-#set global $header="Search Providers"
+#set global $title = 'Config - Providers'
+#set global $header = 'Search Providers'
 #set global $sbPath = '../..'
 #set global $topmenu = 'config'
 ##
@@ -39,7 +39,7 @@
 #for $curNewznabProvider in $sickbeard.newznabProviderList:
-\$(this).addProvider('$curNewznabProvider.getID()', '$curNewznabProvider.name', '$curNewznabProvider.url', '<%= starify(curNewznabProvider.key) %>', '$curNewznabProvider.catIDs', $int($curNewznabProvider.default), show_nzb_providers);
+\$(this).addProvider('$curNewznabProvider.get_id()', '$curNewznabProvider.name', '$curNewznabProvider.url', '<%= starify(curNewznabProvider.key) %>', '$curNewznabProvider.cat_ids', $int($curNewznabProvider.default), show_nzb_providers);
 #end for
@@ -49,7 +49,7 @@
 #for $curTorrentRssProvider in $sickbeard.torrentRssProviderList:
-\$(this).addTorrentRssProvider('$curTorrentRssProvider.getID()', '$curTorrentRssProvider.name', '$curTorrentRssProvider.url', '<%= starify(curTorrentRssProvider.cookies) %>');
+\$(this).addTorrentRssProvider('$curTorrentRssProvider.get_id()', '$curTorrentRssProvider.name', '$curTorrentRssProvider.url', '<%= starify(curTorrentRssProvider.cookies) %>');
 #end for
@@ -90,7 +90,7 @@
 <p>At least one provider is required but two are recommended.</p>
 #if $methods_notused
-<blockquote style="margin: 20px 0"><%= '/'.join(x for x in methods_notused) %> providers can be enabled in <a href="$sbRoot/config/search/">Search Settings</a></blockquote>
+<blockquote style="margin:20px 0"><%= '/'.join(x for x in methods_notused) %> providers can be enabled in <a href="$sbRoot/config/search/">Search Settings</a></blockquote>
 #else
 <br/>
 #end if
@@ -104,13 +104,12 @@
 #elif $curProvider.providerType == $GenericProvider.TORRENT and not $sickbeard.USE_TORRENTS
 #continue
 #end if
-#set $curName = $curProvider.getID()
+#set $curName = $curProvider.get_id()
 <li class="ui-state-default" id="$curName">
-<input type="checkbox" id="enable_$curName" class="provider_enabler" <%= html_checked if curProvider.isEnabled() else '' %>/>
-<a href="<%= anon_url(curProvider.url) %>" class="imgLink" rel="noreferrer" onclick="window.open(this.href, '_blank'); return false;"><img src="$sbRoot/images/providers/$curProvider.imageName()" alt="$curProvider.name" title="$curProvider.name" width="16" height="16" style="vertical-align:middle;"/></a>
-<span style="vertical-align:middle;">$curProvider.name</span>
+<input type="checkbox" id="enable_$curName" class="provider_enabler" <%= html_checked if curProvider.is_enabled() else '' %>/>
+<a href="<%= anon_url(curProvider.url) %>" class="imgLink" rel="noreferrer" onclick="window.open(this.href, '_blank'); return false;"><img src="$sbRoot/images/providers/$curProvider.image_name()" alt="$curProvider.name" title="$curProvider.name" width="16" height="16" style="vertical-align:middle;"/></a>
+<span style="vertical-align:middle">$curProvider.name</span>
 <%= '*' if not curProvider.supportsBacklog else '' %>
-<%= '**' if 'EZRSS' == curProvider.name else '' %>
 <span class="ui-icon ui-icon-arrowthick-2-n-s pull-right" style="margin-top:3px"></span>
 </li>
 #end for
@@ -125,7 +124,7 @@
 ##<h4 class="note">!</h4><p class="note">Provider is <b>NOT WORKING</b></p>
 </div>
-<input type="hidden" name="provider_order" id="provider_order" value="<%=" ".join([x.getID()+':'+str(int(x.isEnabled())) for x in sickbeard.providers.sortedProviderList()])%>"/>
+<input type="hidden" name="provider_order" id="provider_order" value="<%=' '.join([x.get_id()+':'+str(int(x.is_enabled())) for x in sickbeard.providers.sortedProviderList()])%>"/>
 <div style="width: 300px; float: right">
 <div style="margin: 0px auto; width: 101px">
 <input type="submit" class="btn config_submitter" value="Save Changes" />
@@ -157,7 +156,7 @@
 #elif $curProvider.providerType == $GenericProvider.TORRENT and not $sickbeard.USE_TORRENTS
 #continue
 #end if
-#if $curProvider.isEnabled()
+#if $curProvider.is_enabled()
 $provider_config_list_enabled.append($curProvider)
 #else
 $provider_config_list.append($curProvider)
@@ -169,14 +168,14 @@
 #if $provider_config_list_enabled
 <optgroup label="Enabled...">
 #for $cur_provider in $provider_config_list_enabled:
-<option value="$cur_provider.getID()">$cur_provider.name</option>
+<option value="$cur_provider.get_id()">$cur_provider.name</option>
 #end for
 </optgroup>
 #end if
 #if $provider_config_list
 <optgroup label="Not Enabled...">
 #for $cur_provider in $provider_config_list
-<option value="$cur_provider.getID()">$cur_provider.name</option>
+<option value="$cur_provider.get_id()">$cur_provider.name</option>
 #end for
 </optgroup>
 #end if
@@ -188,76 +187,71 @@
 </label>
 </div>
 <!-- start div for editing providers //-->
 #for $curNewznabProvider in [$curProvider for $curProvider in $sickbeard.newznabProviderList]
-<div class="providerDiv" id="${curNewznabProvider.getID()}Div">
+<div class="providerDiv" id="${curNewznabProvider.get_id()}Div">
 #if $curNewznabProvider.default and $curNewznabProvider.needs_auth
 <div class="field-pair">
-<label for="${curNewznabProvider.getID()}_url">
+<label for="${curNewznabProvider.get_id()}_url">
 <span class="component-title">URL</span>
 <span class="component-desc">
-<input type="text" id="${curNewznabProvider.getID()}_url" value="$curNewznabProvider.url" class="form-control input-sm input350" disabled/>
+<input type="text" id="${curNewznabProvider.get_id()}_url" value="$curNewznabProvider.url" class="form-control input-sm input350" disabled/>
 </span>
 </label>
 </div>
 <div class="field-pair">
-<label for="${curNewznabProvider.getID()}_hash">
+<label for="${curNewznabProvider.get_id()}_hash">
 <span class="component-title">API key</span>
 <span class="component-desc">
-<input type="text" id="${curNewznabProvider.getID()}_hash" value="<%= starify(curNewznabProvider.key) %>" newznab_name="${curNewznabProvider.getID()}_hash" class="newznab_key form-control input-sm input350" />
+<input type="text" id="${curNewznabProvider.get_id()}_hash" value="<%= starify(curNewznabProvider.key) %>" newznab_name="${curNewznabProvider.get_id()}_hash" class="newznab_key form-control input-sm input350" />
 <div class="clear-left"><p>get API key from provider website</p></div>
 </span>
 </label>
 </div>
 #end if
-#if $hasattr($curNewznabProvider, 'enable_recentsearch'):
+#if $hasattr($curNewznabProvider, 'enable_recentsearch') and $curNewznabProvider.supportsBacklog:
 <div class="field-pair">
-<label for="${curNewznabProvider.getID()}_enable_recentsearch">
+<label for="${curNewznabProvider.get_id()}_enable_recentsearch">
 <span class="component-title">Enable recent searches</span>
 <span class="component-desc">
-<input type="checkbox" name="${curNewznabProvider.getID()}_enable_recentsearch" id="${curNewznabProvider.getID()}_enable_recentsearch" <%= html_checked if curNewznabProvider.enable_recentsearch else '' %>/>
+<input type="checkbox" name="${curNewznabProvider.get_id()}_enable_recentsearch" id="${curNewznabProvider.get_id()}_enable_recentsearch" <%= html_checked if curNewznabProvider.enable_recentsearch else '' %>/>
 <p>perform recent searches at provider</p>
 </span>
 </label>
 </div>
 #end if
-#if $hasattr($curNewznabProvider, 'enable_backlog'):
+#if $hasattr($curNewznabProvider, 'enable_backlog') and $curNewznabProvider.supportsBacklog:
 <div class="field-pair">
-<label for="${curNewznabProvider.getID()}_enable_backlog">
+<label for="${curNewznabProvider.get_id()}_enable_backlog">
 <span class="component-title">Enable backlog searches</span>
 <span class="component-desc">
-<input type="checkbox" name="${curNewznabProvider.getID()}_enable_backlog" id="${curNewznabProvider.getID()}_enable_backlog" <%= html_checked if curNewznabProvider.enable_backlog else '' %>/>
+<input type="checkbox" name="${curNewznabProvider.get_id()}_enable_backlog" id="${curNewznabProvider.get_id()}_enable_backlog" <%= html_checked if curNewznabProvider.enable_backlog else '' %>/>
 <p>perform backlog searches at provider</p>
 </span>
 </label>
 </div>
 #end if
-#if $hasattr($curNewznabProvider, 'search_mode'):
+#if $hasattr($curNewznabProvider, 'search_mode') and $curNewznabProvider.supportsBacklog:
 <div class="field-pair">
 <span class="component-title">Season search mode</span>
 <span class="component-desc">
 <label class="space-right">
-<input type="radio" name="${curNewznabProvider.getID()}_search_mode" id="${curNewznabProvider.getID()}_search_mode_sponly" value="sponly" <%= html_checked if 'sponly' == curNewznabProvider.search_mode else '' %>/>season packs only
+<input type="radio" name="${curNewznabProvider.get_id()}_search_mode" id="${curNewznabProvider.get_id()}_search_mode_sponly" value="sponly" <%= html_checked if 'sponly' == curNewznabProvider.search_mode else '' %>/>season packs only
 </label>
 <label>
-<input type="radio" name="${curNewznabProvider.getID()}_search_mode" id="${curNewznabProvider.getID()}_search_mode_eponly" value="eponly" <%= html_checked if 'eponly' == curNewznabProvider.search_mode else '' %>/>episodes only
+<input type="radio" name="${curNewznabProvider.get_id()}_search_mode" id="${curNewznabProvider.get_id()}_search_mode_eponly" value="eponly" <%= html_checked if 'eponly' == curNewznabProvider.search_mode else '' %>/>episodes only
 </label>
 <p>when searching for complete seasons, search for packs or collect single episodes</p>
 </span>
 </div>
 #end if
-#if $hasattr($curNewznabProvider, 'search_fallback'):
+#if $hasattr($curNewznabProvider, 'search_fallback') and $curNewznabProvider.supportsBacklog:
 <div class="field-pair">
-<label for="${curNewznabProvider.getID()}_search_fallback">
+<label for="${curNewznabProvider.get_id()}_search_fallback">
 <span class="component-title">Season search fallback</span>
 <span class="component-desc">
-<input type="checkbox" name="${curNewznabProvider.getID()}_search_fallback" id="${curNewznabProvider.getID()}_search_fallback" <%= html_checked if curNewznabProvider.search_fallback else '' %>/>
+<input type="checkbox" name="${curNewznabProvider.get_id()}_search_fallback" id="${curNewznabProvider.get_id()}_search_fallback" <%= html_checked if curNewznabProvider.search_fallback else '' %>/>
 <p>run the alternate season search mode when a complete season is not found</p>
 </span>
 </label>
|
|||
#end if
|
||||
</div>
|
||||
#end for
|
||||
##
|
||||
|
||||
##
|
||||
 #for $curNzbProvider in [$curProvider for $curProvider in $sickbeard.providers.sortedProviderList() if $curProvider.providerType == $GenericProvider.NZB and $curProvider not in $sickbeard.newznabProviderList]:
-<div class="providerDiv" id="${curNzbProvider.getID()}Div">
+<div class="providerDiv" id="${curNzbProvider.get_id()}Div">
 #if $hasattr($curNzbProvider, 'username'):
 <div class="field-pair">
-<label for="${curNzbProvider.getID()}_username">
+<label for="${curNzbProvider.get_id()}_username">
 <span class="component-title">Username</span>
 <span class="component-desc">
-<input type="text" name="${curNzbProvider.getID()}_username" value="$curNzbProvider.username" class="form-control input-sm input350" />
+<input type="text" name="${curNzbProvider.get_id()}_username" value="$curNzbProvider.username" class="form-control input-sm input350" />
 </span>
 </label>
 </div>
 #end if
 #if $hasattr($curNzbProvider, 'api_key'):
 <div class="field-pair">
-<label for="${curNzbProvider.getID()}_api_key">
+<label for="${curNzbProvider.get_id()}_api_key">
 <span class="component-title">API key</span>
 <span class="component-desc">
-<input type="text" name="${curNzbProvider.getID()}_api_key" value="<%= starify(curNzbProvider.api_key) %>" class="form-control input-sm input350" />
+#set $field_name = curNzbProvider.get_id() + '_api_key'
+<input type="text" name="$field_name" value="<%= starify(curNzbProvider.api_key) %>" class="form-control input-sm input350" />
+#if callable(getattr(curNzbProvider, 'ui_string'))
+<div class="clear-left"><p>${curNzbProvider.ui_string($field_name)}</p></div>
+#end if
 </span>
 </label>
 </div>
 #end if
-#if $hasattr($curNzbProvider, 'enable_recentsearch'):
+#if $hasattr($curNzbProvider, 'enable_recentsearch') and $curNzbProvider.supportsBacklog:
 <div class="field-pair">
-<label for="${curNzbProvider.getID()}_enable_recentsearch">
+<label for="${curNzbProvider.get_id()}_enable_recentsearch">
 <span class="component-title">Enable recent searches</span>
 <span class="component-desc">
-<input type="checkbox" name="${curNzbProvider.getID()}_enable_recentsearch" id="${curNzbProvider.getID()}_enable_recentsearch" <%= html_checked if curNzbProvider.enable_recentsearch else '' %>/>
+<input type="checkbox" name="${curNzbProvider.get_id()}_enable_recentsearch" id="${curNzbProvider.get_id()}_enable_recentsearch" <%= html_checked if curNzbProvider.enable_recentsearch else '' %>/>
 <p>enable provider to perform recent searches.</p>
 </span>
 </label>
 </div>
 #end if
-#if $hasattr($curNzbProvider, 'enable_backlog'):
+#if $hasattr($curNzbProvider, 'enable_backlog') and $curNzbProvider.supportsBacklog:
 <div class="field-pair">
-<label for="${curNzbProvider.getID()}_enable_backlog">
+<label for="${curNzbProvider.get_id()}_enable_backlog">
 <span class="component-title">Enable backlog searches</span>
 <span class="component-desc">
-<input type="checkbox" name="${curNzbProvider.getID()}_enable_backlog" id="${curNzbProvider.getID()}_enable_backlog" <%= html_checked if curNzbProvider.enable_backlog else '' %>/>
+<input type="checkbox" name="${curNzbProvider.get_id()}_enable_backlog" id="${curNzbProvider.get_id()}_enable_backlog" <%= html_checked if curNzbProvider.enable_backlog else '' %>/>
 <p>enable provider to perform backlog searches.</p>
 </span>
 </label>
 </div>
 #end if
-#if $hasattr($curNzbProvider, 'search_fallback'):
+#if $hasattr($curNzbProvider, 'search_mode') and $curNzbProvider.supportsBacklog:
 <div class="field-pair">
-<label for="${curNzbProvider.getID()}_search_fallback">
+<span class="component-title">Season search mode</span>
+<span class="component-desc">
+<label class="space-right">
+<input type="radio" name="${curNzbProvider.get_id()}_search_mode" id="${curNzbProvider.get_id()}_search_mode_sponly" value="sponly" <%= html_checked if 'sponly' == curNzbProvider.search_mode else '' %>/>season packs only
+</label>
+<label>
+<input type="radio" name="${curNzbProvider.get_id()}_search_mode" id="${curNzbProvider.get_id()}_search_mode_eponly" value="eponly" <%= html_checked if 'eponly' == curNzbProvider.search_mode else '' %>/>episodes only
+</label>
+<p>when searching for complete seasons, search for packs or collect single episodes</p>
 </span>
 </div>
 #end if
+#if $hasattr($curNzbProvider, 'search_fallback') and $curNzbProvider.supportsBacklog:
 <div class="field-pair">
+<label for="${curNzbProvider.get_id()}_search_fallback">
 <span class="component-title">Season search fallback</span>
 <span class="component-desc">
-<input type="checkbox" name="${curNzbProvider.getID()}_search_fallback" id="${curNzbProvider.getID()}_search_fallback" <%= html_checked if curNzbProvider.search_fallback else '' %>/>
-<p>when searching for a complete season depending on search mode you may return no results, this helps by restarting the search using the opposite search mode.</p>
+<input type="checkbox" name="${curNzbProvider.get_id()}_search_fallback" id="${curNzbProvider.get_id()}_search_fallback" <%= html_checked if curNzbProvider.search_fallback else '' %>/>
+<p>run the alternate season search mode when a complete season is not found</p>
 </span>
 </label>
 </div>
 #end if
-#if $hasattr($curNzbProvider, 'search_mode'):
+#if not $curNzbProvider.supportsBacklog:
 <div class="field-pair">
 <label>
 <span class="component-title">Season search mode</span>
 <span class="component-desc">
 <p>when searching for complete seasons you can choose to have it look for season packs only, or choose to have it build a complete season from just single episodes.</p>
 </span>
 </label>
 <label>
 <span class="component-title"></span>
 <span class="component-desc">
-<input type="radio" name="${curNzbProvider.getID()}_search_mode" id="${curNzbProvider.getID()}_search_mode_sponly" value="sponly" <%= html_checked if 'sponly' == curNzbProvider.search_mode else '' %>/>season packs only.
 </span>
 </label>
 <label>
 <span class="component-title"></span>
 <span class="component-desc">
-<input type="radio" name="${curNzbProvider.getID()}_search_mode" id="${curNzbProvider.getID()}_search_mode_eponly" value="eponly" <%= html_checked if 'eponly' == curNzbProvider.search_mode else '' %>/>episodes only.
 </span>
 </label>
+<span class="component-desc">The latest releases are the focus of this provider, no backlog searching</span>
 </div>
 #end if
 </div>
 #end for
 ##
 ##
 #for $curTorrentProvider in [$curProvider for $curProvider in $sickbeard.providers.sortedProviderList() if $curProvider.providerType == $GenericProvider.TORRENT]:
-<div class="providerDiv" id="${curTorrentProvider.getID()}Div">
+<div class="providerDiv" id="${curTorrentProvider.get_id()}Div">
 #if $hasattr($curTorrentProvider, 'api_key'):
 <div class="field-pair">
-<label for="${curTorrentProvider.getID()}_api_key">
+<label for="${curTorrentProvider.get_id()}_api_key">
 <span class="component-title">Api key:</span>
 <span class="component-desc">
-<input type="text" name="${curTorrentProvider.getID()}_api_key" id="${curTorrentProvider.getID()}_api_key" value="<%= starify(curTorrentProvider.api_key) %>" class="form-control input-sm input350" />
+<input type="text" name="${curTorrentProvider.get_id()}_api_key" id="${curTorrentProvider.get_id()}_api_key" value="<%= starify(curTorrentProvider.api_key) %>" class="form-control input-sm input350" />
 </span>
 </label>
 </div>
 #end if
 #if $hasattr($curTorrentProvider, 'digest'):
 <div class="field-pair">
-<label for="${curTorrentProvider.getID()}_digest">
+<label for="${curTorrentProvider.get_id()}_digest">
 <span class="component-title">Digest:</span>
 <span class="component-desc">
-<input type="text" name="${curTorrentProvider.getID()}_digest" id="${curTorrentProvider.getID()}_digest" value="$curTorrentProvider.digest" class="form-control input-sm input350" />
+<input type="text" name="${curTorrentProvider.get_id()}_digest" id="${curTorrentProvider.get_id()}_digest" value="$curTorrentProvider.digest" class="form-control input-sm input350" />
 </span>
 </label>
 </div>
 #end if
 #if $hasattr($curTorrentProvider, 'hash'):
 <div class="field-pair">
-<label for="${curTorrentProvider.getID()}_hash">
+<label for="${curTorrentProvider.get_id()}_hash">
 <span class="component-title">Hash:</span>
 <span class="component-desc">
-<input type="text" name="${curTorrentProvider.getID()}_hash" id="${curTorrentProvider.getID()}_hash" value="$curTorrentProvider.hash" class="form-control input-sm input350" />
+<input type="text" name="${curTorrentProvider.get_id()}_hash" id="${curTorrentProvider.get_id()}_hash" value="$curTorrentProvider.hash" class="form-control input-sm input350" />
 </span>
 </label>
 </div>
 #end if
 #if $hasattr($curTorrentProvider, 'username'):
 <div class="field-pair">
-<label for="${curTorrentProvider.getID()}_username">
+<label for="${curTorrentProvider.get_id()}_username">
 <span class="component-title">Username:</span>
 <span class="component-desc">
-<input type="text" name="${curTorrentProvider.getID()}_username" id="${curTorrentProvider.getID()}_username" value="$curTorrentProvider.username" class="form-control input-sm input350" />
+<input type="text" name="${curTorrentProvider.get_id()}_username" id="${curTorrentProvider.get_id()}_username" value="$curTorrentProvider.username" class="form-control input-sm input350" />
 </span>
 </label>
 </div>
 #end if
 #if $hasattr($curTorrentProvider, 'password'):
 <div class="field-pair">
-<label for="${curTorrentProvider.getID()}_password">
+<label for="${curTorrentProvider.get_id()}_password">
 <span class="component-title">Password:</span>
 <span class="component-desc">
-<input type="password" name="${curTorrentProvider.getID()}_password" id="${curTorrentProvider.getID()}_password" value="#echo '*' * len($curTorrentProvider.password)#" class="form-control input-sm input350" />
+<input type="password" name="${curTorrentProvider.get_id()}_password" id="${curTorrentProvider.get_id()}_password" value="#echo '*' * len($curTorrentProvider.password)#" class="form-control input-sm input350" />
 </span>
 </label>
 </div>
 #end if
 #if $hasattr($curTorrentProvider, 'passkey'):
 <div class="field-pair">
-<label for="${curTorrentProvider.getID()}_passkey">
+<label for="${curTorrentProvider.get_id()}_passkey">
 <span class="component-title">Passkey:</span>
 <span class="component-desc">
-<input type="text" name="${curTorrentProvider.getID()}_passkey" id="${curTorrentProvider.getID()}_passkey" value="<%= starify(curTorrentProvider.passkey) %>" class="form-control input-sm input350" />
+<input type="text" name="${curTorrentProvider.get_id()}_passkey" id="${curTorrentProvider.get_id()}_passkey" value="<%= starify(curTorrentProvider.passkey) %>" class="form-control input-sm input350" />
 </span>
 </label>
 </div>
 #end if
-#if $hasattr($curTorrentProvider, 'ratio'):
+#if $hasattr($curTorrentProvider, '_seed_ratio') and 'blackhole' != $sickbeard.TORRENT_METHOD:
+#set $torrent_method_text = {'blackhole': 'Black hole', 'utorrent': 'uTorrent', 'transmission': 'Transmission', 'deluge': 'Deluge', 'download_station': 'Synology DS', 'rtorrent': 'rTorrent'}
 <div class="field-pair">
-<label for="${curTorrentProvider.getID()}_ratio">
-<span class="component-title" id="${curTorrentProvider.getID()}_ratio_desc">Seed ratio:</span>
+<label for="${curTorrentProvider.get_id()}_ratio">
+<span class="component-title" id="${curTorrentProvider.get_id()}_ratio_desc">Seed until ratio (the goal)</span>
 <span class="component-desc">
-<input type="number" step="0.1" name="${curTorrentProvider.getID()}_ratio" id="${curTorrentProvider.getID()}_ratio" value="$curTorrentProvider.ratio" class="form-control input-sm input75" />
-</span>
-</label>
-<label>
-<span class="component-title"> </span>
-<span class="component-desc">
-<p>stop transfer when ratio is reached<br>(-1 SickGear default to seed forever, or leave blank for downloader default)</p>
+<input type="number" step="0.1" name="${curTorrentProvider.get_id()}_ratio" id="${curTorrentProvider.get_id()}_ratio" value="$curTorrentProvider._seed_ratio" class="form-control input-sm input75" />
+<p>this ratio is requested of each download sent to $torrent_method_text[$sickbeard.TORRENT_METHOD]</p>
+<div class="clear-left"><p>(set -1 to seed forever, or leave blank for the $torrent_method_text[$sickbeard.TORRENT_METHOD] default)</p></div>
 </span>
 </label>
 </div>
 #end if
 #if $hasattr($curTorrentProvider, 'minseed'):
 <div class="field-pair">
-<label for="${curTorrentProvider.getID()}_minseed">
-<span class="component-title" id="${curTorrentProvider.getID()}_minseed_desc">Minimum seeders:</span>
+<label for="${curTorrentProvider.get_id()}_minseed">
+<span class="component-title" id="${curTorrentProvider.get_id()}_minseed_desc">Minimum seeders</span>
 <span class="component-desc">
-<input type="number" name="${curTorrentProvider.getID()}_minseed" id="${curTorrentProvider.getID()}_minseed" value="$curTorrentProvider.minseed" class="form-control input-sm input75" />
+<input type="number" name="${curTorrentProvider.get_id()}_minseed" id="${curTorrentProvider.get_id()}_minseed" value="$curTorrentProvider.minseed" class="form-control input-sm input75" />
+<p>a release must have to be snatch worthy</p>
 </span>
 </label>
 </div>
 #end if
#if $hasattr($curTorrentProvider, 'minleech'):
|
||||
<div class="field-pair">
|
||||
<label for="${curTorrentProvider.getID()}_minleech">
|
||||
<span class="component-title" id="${curTorrentProvider.getID()}_minleech_desc">Minimum leechers:</span>
|
||||
<label for="${curTorrentProvider.get_id()}_minleech">
|
||||
<span class="component-title" id="${curTorrentProvider.get_id()}_minleech_desc">Minimum leechers</span>
|
||||
<span class="component-desc">
|
||||
<input type="number" name="${curTorrentProvider.getID()}_minleech" id="${curTorrentProvider.getID()}_minleech" value="$curTorrentProvider.minleech" class="form-control input-sm input75" />
|
||||
<input type="number" name="${curTorrentProvider.get_id()}_minleech" id="${curTorrentProvider.get_id()}_minleech" value="$curTorrentProvider.minleech" class="form-control input-sm input75" />
|
||||
<p>a release must have to be snatch worthy</p>
|
||||
</span>
|
||||
</label>
|
||||
</div>
|
||||
#end if
|
||||
|
||||
#if $hasattr($curTorrentProvider, 'proxy'):
|
||||
<div class="field-pair">
|
||||
<label for="${curTorrentProvider.getID()}_proxy">
|
||||
<label for="${curTorrentProvider.get_id()}_proxy">
|
||||
<span class="component-title">Access provider via proxy</span>
|
||||
<span class="component-desc">
|
||||
<input type="checkbox" class="enabler" name="${curTorrentProvider.getID()}_proxy" id="${curTorrentProvider.getID()}_proxy" <%= html_checked if curTorrentProvider.proxy.enabled else '' %>/>
|
||||
<input type="checkbox" class="enabler" name="${curTorrentProvider.get_id()}_proxy" id="${curTorrentProvider.get_id()}_proxy" <%= html_checked if curTorrentProvider.proxy.enabled else '' %>/>
|
||||
<p>to bypass country blocking mechanisms</p>
|
||||
</span>
|
||||
</label>
|
||||
</div>
|
||||
|
||||
#if $hasattr($curTorrentProvider.proxy, 'url'):
|
||||
<div class="field-pair content_${curTorrentProvider.getID()}_proxy" id="content_${curTorrentProvider.getID()}_proxy">
|
||||
<label for="${curTorrentProvider.getID()}_proxy_url">
|
||||
<div class="field-pair content_${curTorrentProvider.get_id()}_proxy" id="content_${curTorrentProvider.get_id()}_proxy">
|
||||
<label for="${curTorrentProvider.get_id()}_proxy_url">
|
||||
<span class="component-title">Proxy URL:</span>
|
||||
<span class="component-desc">
|
||||
<select name="${curTorrentProvider.getID()}_proxy_url" id="${curTorrentProvider.getID()}_proxy_url" class="form-control input-sm">
|
||||
<select name="${curTorrentProvider.get_id()}_proxy_url" id="${curTorrentProvider.get_id()}_proxy_url" class="form-control input-sm">
|
||||
#for $i in $curTorrentProvider.proxy.urls.keys():
|
||||
<option value="$curTorrentProvider.proxy.urls[$i]" <%= html_selected if curTorrentProvider.proxy.url == curTorrentProvider.proxy.urls[i] else '' %>>$i</option>
|
||||
#end for
|
||||
|
@@ -486,85 +467,71 @@
</div>
#end if
#end if

#if $hasattr($curTorrentProvider, 'confirmed'):
<div class="field-pair">
<label for="${curTorrentProvider.getID()}_confirmed">
<label for="${curTorrentProvider.get_id()}_confirmed">
<span class="component-title">Confirmed download</span>
<span class="component-desc">
<input type="checkbox" name="${curTorrentProvider.getID()}_confirmed" id="${curTorrentProvider.getID()}_confirmed" <%= html_checked if curTorrentProvider.confirmed else '' %>/>
<input type="checkbox" name="${curTorrentProvider.get_id()}_confirmed" id="${curTorrentProvider.get_id()}_confirmed" <%= html_checked if curTorrentProvider.confirmed else '' %>/>
<p>only download torrents from trusted or verified uploaders ?</p>
</span>
</label>
</div>
#end if

#if $hasattr($curTorrentProvider, 'freeleech'):
<div class="field-pair">
<label for="${curTorrentProvider.getID()}_freeleech">
<label for="${curTorrentProvider.get_id()}_freeleech">
<span class="component-title">Freeleech</span>
<span class="component-desc">
<input type="checkbox" name="${curTorrentProvider.getID()}_freeleech" id="${curTorrentProvider.getID()}_freeleech" <%= html_checked if curTorrentProvider.freeleech else '' %>/>
<input type="checkbox" name="${curTorrentProvider.get_id()}_freeleech" id="${curTorrentProvider.get_id()}_freeleech" <%= html_checked if curTorrentProvider.freeleech else '' %>/>
<p>only download <b>[FreeLeech]</b> torrents.</p>
</span>
</label>
</div>
#end if

#if $hasattr($curTorrentProvider, 'enable_recentsearch'):
#if $hasattr($curTorrentProvider, 'enable_recentsearch') and $curTorrentProvider.supportsBacklog:
<div class="field-pair">
<label for="${curTorrentProvider.getID()}_enable_recentsearch">
<label for="${curTorrentProvider.get_id()}_enable_recentsearch">
<span class="component-title">Enable recent searches</span>
<span class="component-desc">
<input type="checkbox" name="${curTorrentProvider.getID()}_enable_recentsearch" id="${curTorrentProvider.getID()}_enable_recentsearch" <%= html_checked if curTorrentProvider.enable_recentsearch else '' %>/>
<input type="checkbox" name="${curTorrentProvider.get_id()}_enable_recentsearch" id="${curTorrentProvider.get_id()}_enable_recentsearch" <%= html_checked if curTorrentProvider.enable_recentsearch else '' %>/>
<p>enable provider to perform recent searches.</p>
</span>
</label>
</div>
#end if

#if $hasattr($curTorrentProvider, 'enable_backlog'):
#if $hasattr($curTorrentProvider, 'enable_backlog') and $curTorrentProvider.supportsBacklog:
<div class="field-pair">
<label for="${curTorrentProvider.getID()}_enable_backlog">
<label for="${curTorrentProvider.get_id()}_enable_backlog">
<span class="component-title">Enable backlog searches</span>
<span class="component-desc">
<input type="checkbox" name="${curTorrentProvider.getID()}_enable_backlog" id="${curTorrentProvider.getID()}_enable_backlog" <%= html_checked if curTorrentProvider.enable_backlog else '' %>/>
<input type="checkbox" name="${curTorrentProvider.get_id()}_enable_backlog" id="${curTorrentProvider.get_id()}_enable_backlog" <%= html_checked if curTorrentProvider.enable_backlog else '' %>/>
<p>enable provider to perform backlog searches.</p>
</span>
</label>
</div>
#end if

#if $hasattr($curTorrentProvider, 'search_fallback'):
#if $hasattr($curTorrentProvider, 'search_mode') and $curTorrentProvider.supportsBacklog:
<div class="field-pair">
<label for="${curTorrentProvider.getID()}_search_fallback">
<span class="component-title">Season search fallback</span>
<span class="component-desc">
<input type="checkbox" name="${curTorrentProvider.getID()}_search_fallback" id="${curTorrentProvider.getID()}_search_fallback" <%= html_checked if curTorrentProvider.search_fallback else '' %>/>
<p>when searching for a complete season depending on search mode you may return no results, this helps by restarting the search using the opposite search mode.</p>
</span>
</label>
<span class="component-title">Season search mode</span>
<span class="component-desc">
<label class="space-right">
<input type="radio" name="${curTorrentProvider.get_id()}_search_mode" id="${curTorrentProvider.get_id()}_search_mode_sponly" value="sponly" <%= html_checked if 'sponly' == curTorrentProvider.search_mode else '' %>/>season packs only
</label>
<label>
<input type="radio" name="${curTorrentProvider.get_id()}_search_mode" id="${curTorrentProvider.get_id()}_search_mode_eponly" value="eponly" <%= html_checked if 'eponly' == curTorrentProvider.search_mode else '' %>/>episodes only
</label>
<p>when searching for complete seasons, search for packs or collect single episodes</p>
</span>
</div>
#end if

#if $hasattr($curTorrentProvider, 'search_mode'):
#if $hasattr($curTorrentProvider, 'search_fallback') and $curTorrentProvider.supportsBacklog:
<div class="field-pair">
<label>
<span class="component-title">Season search mode</span>
<label for="${curTorrentProvider.get_id()}_search_fallback">
<span class="component-title">Season search fallback</span>
<span class="component-desc">
<p>when searching for complete seasons you can choose to have it look for season packs only, or choose to have it build a complete season from just single episodes.</p>
</span>
</label>
<label>
<span class="component-title"></span>
<span class="component-desc">
<input type="radio" name="${curTorrentProvider.getID()}_search_mode" id="${curTorrentProvider.getID()}_search_mode_sponly" value="sponly" <%= html_checked if 'sponly' == curTorrentProvider.search_mode else '' %>/>season packs only.
</span>
</label>
<label>
<span class="component-title"></span>
<span class="component-desc">
<input type="radio" name="${curTorrentProvider.getID()}_search_mode" id="${curTorrentProvider.getID()}_search_mode_eponly" value="eponly" <%= html_checked if 'eponly' == curTorrentProvider.search_mode else '' %>/>episodes only.
<input type="checkbox" name="${curTorrentProvider.get_id()}_search_fallback" id="${curTorrentProvider.get_id()}_search_fallback" <%= html_checked if curTorrentProvider.search_fallback else '' %>/>
<p>run the alternate season search mode when a complete season is not found</p>
</span>
</label>
</div>

@@ -82,7 +82,7 @@
#include $os.path.join($sickbeard.PROG_DIR, 'gui/slick/interfaces/default/inc_qualityChooser.tmpl')

#if $anyQualities + $bestQualities
<div class="field-pair">
<div class="field-pair show-if-quality-custom">
<label for="archive_firstmatch">
<span class="component-title">End upgrade on first match</span>
<span class="component-desc">

@@ -211,6 +211,20 @@
</label>
</div>

<div class="field-pair#if $sickbeard.SHOWLIST_TAGVIEW != 'custom' then ' hidden' else ''#" style="margin-bottom:10px">
<label for="tag">
<span class="component-title">Show is in group</span>
<span class="component-desc">
<select name="tag" id="tag" class="form-control form-control-inline input-sm">
#for $tag in $sickbeard.SHOW_TAGS:
<option value="$tag" #if $tag == $show.tag then 'selected="selected"' else ''#>$tag#echo ('', ' (default)')['Show List' == $tag]#</option>
#end for
</select>
<span>and is displayed on the show list page under this section</span>
</span>
</label>
</div>

<div class="field-pair">
<label for="sports">
<span class="component-title">Show is sports</span>

@@ -231,20 +245,6 @@
</label>
</div>

<div class="field-pair#if $sickbeard.SHOWLIST_TAGVIEW != 'custom' then ' hidden' else ''#" style="margin-bottom:10px">
<label for="tag">
<span class="component-title">Show is grouped in</span>
<span class="component-desc">
<select name="tag" id="tag" class="form-control form-control-inline input-sm">
#for $tag in $sickbeard.SHOW_TAGS:
<option value="$tag" #if $tag == $show.tag then 'selected="selected"' else ''#>$tag#echo ('', ' (default)')['Show List' == $tag]#</option>
#end for
</select>
<span>and displays on the show list page under this section</span>
</span>
</label>
</div>

#if $show.is_anime:
#import sickbeard.blackandwhitelist
#include $os.path.join($sickbeard.PROG_DIR, 'gui/slick/interfaces/default/inc_blackwhitelist.tmpl')

@@ -139,9 +139,9 @@
#else
#if 0 < $hItem['provider']
#if $curStatus in [SNATCHED, FAILED]
#set $provider = $providers.getProviderClass($generic.GenericProvider.makeID($hItem['provider']))
#set $provider = $providers.getProviderClass($generic.GenericProvider.make_id($hItem['provider']))
#if None is not $provider
<img src="$sbRoot/images/providers/<%= provider.imageName() %>" width="16" height="16" /><span>$provider.name</span>
<img src="$sbRoot/images/providers/<%= provider.image_name() %>" width="16" height="16" /><span>$provider.name</span>
#else
<img src="$sbRoot/images/providers/missing.png" width="16" height="16" title="missing provider" /><span>Missing Provider</span>
#end if

@@ -186,10 +186,10 @@
#set $curStatus, $curQuality = $Quality.splitCompositeStatus(int($action['action']))
#set $basename = $os.path.basename($action['resource'])
#if $curStatus in [SNATCHED, FAILED]
#set $provider = $providers.getProviderClass($generic.GenericProvider.makeID($action['provider']))
#set $provider = $providers.getProviderClass($generic.GenericProvider.make_id($action['provider']))
#if None is not $provider
#set $prov_list += ['<span%s><img class="help" src="%s/images/providers/%s" width="16" height="16" alt="%s" title="%s.. %s: %s" /></span>'\
% (('', ' class="fail"')[FAILED == $curStatus], $sbRoot, $provider.imageName(), $provider.name,
% (('', ' class="fail"')[FAILED == $curStatus], $sbRoot, $provider.image_name(), $provider.name,
('%s%s' % ($order, 'th' if $order in [11, 12, 13] or str($order)[-1] not in $ordinal_indicators else $ordinal_indicators[str($order)[-1]]), 'Snatch failed')[FAILED == $curStatus],
$provider.name, $basename)]
#set $order += (0, 1)[SNATCHED == $curStatus]
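The ordinal expression in the history template above maps the snatch order to '1st', '2nd', '3rd', ..., special-casing 11, 12 and 13 to 'th'. The same logic as a standalone sketch (the dict mirrors the template's `$ordinal_indicators`):

```python
# mirrors the template's $ordinal_indicators lookup
ordinal_indicators = {'1': 'st', '2': 'nd', '3': 'rd'}

def ordinal(order):
    # 11, 12 and 13 take 'th' even though they end in 1, 2, 3
    s = str(order)
    if order in [11, 12, 13] or s[-1] not in ordinal_indicators:
        return s + 'th'
    return s + ordinal_indicators[s[-1]]
```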

@@ -19,7 +19,7 @@
</div>

<div id="customQualityWrapper">
<div id="customQuality">
<div id="customQuality" class="show-if-quality-custom">
<div class="component-group-desc">
<p>An <em>Initial</em> quality episode must be found before an <em>Upgrade to</em> selection is considered.</p>
</div>

@@ -72,9 +72,9 @@
<tr>
<td class="text-nowrap text-left">#echo re.sub('"', '', $hItem['release'])#</td>
<td>#echo ($hItem['size'], '?')[-1 == $hItem['size']]#</td>
#set $provider = $providers.getProviderClass($generic.GenericProvider.makeID($hItem['provider']))
#set $provider = $providers.getProviderClass($generic.GenericProvider.make_id($hItem['provider']))
#if None is not $provider:
<td><img src="$sbRoot/images/providers/<%= provider.imageName() %>" width="16" height="16" alt="$provider.name" title="$provider.name" /></td>
<td><img src="$sbRoot/images/providers/<%= provider.image_name() %>" width="16" height="16" alt="$provider.name" title="$provider.name" /></td>
#else
<td><img src="$sbRoot/images/providers/missing.png" width="16" height="16" alt="missing provider" title="missing provider" /></td>
#end if

@@ -489,10 +489,11 @@ $(document).ready(function(){
$.get(sbRoot + '/home/getPushbulletDevices', {'accessToken': pushbullet_access_token})
.done(function (data) {
var devices = jQuery.parseJSON(data || '{}').devices;
var error = jQuery.parseJSON(data || '{}').error;
$('#pushbullet_device_list').html('');
if (devices) {
// add default option to send to all devices
$('#pushbullet_device_list').append('<option value="" selected="selected">-- All Devices --</option>');
if (devices) {
for (var i = 0; i < devices.length; i++) {
// only list active device targets
if (devices[i].active == true) {

@@ -507,7 +508,11 @@ $(document).ready(function(){
}
$('#getPushbulletDevices').prop('disabled', false);
if (msg) {
$('#testPushbullet-result').html(msg);
if (error.message) {
$('#testPushbullet-result').html(error.message);
} else {
$('#testPushbullet-result').html(msg);
}
}
});
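The `jQuery.parseJSON(data || '{}')` pattern above guards against an empty response body before picking out `devices` and `error`. The same defensive idea in Python, as a sketch (the helper name `parse_devices` is hypothetical, not part of SickGear):

```python
import json

def parse_devices(payload):
    """Parse a Pushbullet-style response defensively.

    An empty or missing payload falls back to '{}' so json.loads never
    raises on blank input; absent keys simply come back as None.
    """
    body = json.loads(payload or '{}')
    return body.get('devices'), body.get('error')

devices, error = parse_devices('')  # empty response handled gracefully
```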

@@ -1,5 +1,5 @@
function setFromPresets (preset) {
var elCustomQuality = $('#customQuality'),
var elCustomQuality = $('.show-if-quality-custom'),
selected = 'selected';
if (0 == preset) {
elCustomQuality.show();

@@ -40,8 +40,8 @@ $(document).ready(function() {
if (is_default)
setDefault($('#rootDirs option').attr('id'));

$.get(sbRoot+'/config/general/saveRootDirs', { rootDirString: $('#rootDirText').val() });
refreshRootDirs();
$.get(sbRoot+'/config/general/saveRootDirs', { rootDirString: $('#rootDirText').val() });
}

function editRootDir(path) {

@@ -45,7 +45,7 @@ from .element import (

# The very first thing we do is give a useful error if someone is
# running this code under Python 3 without converting it.
syntax_error = u'You are trying to run the Python 2 version of Beautiful Soup under Python 3. This will not work. You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).'
syntax_error = u'You are trying to run the Python 2 version of Beautiful Soup under Python 3. This will not work.'\
'You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).'

class BeautifulSoup(Tag):
"""

@@ -77,6 +77,8 @@ class BeautifulSoup(Tag):

ASCII_SPACES = '\x20\x0a\x09\x0c\x0d'

NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nTo get rid of this warning, change this:\n\n BeautifulSoup([your markup])\n\nto this:\n\n BeautifulSoup([your markup], \"%(parser)s\")\n"

def __init__(self, markup="", features=None, builder=None,
parse_only=None, from_encoding=None, **kwargs):
"""The Soup object is initialized as the 'root tag', and the

@@ -114,9 +116,9 @@
del kwargs['isHTML']
warnings.warn(
"BS4 does not respect the isHTML argument to the "
"BeautifulSoup constructor. You can pass in features='html' "
"or features='xml' to get a builder capable of handling "
"one or the other.")
"BeautifulSoup constructor. Suggest you use "
"features='lxml' for HTML and features='lxml-xml' for "
"XML.")

def deprecated_argument(old_name, new_name):
if old_name in kwargs:

@@ -140,6 +142,7 @@
"__init__() got an unexpected keyword argument '%s'" % arg)

if builder is None:
original_features = features
if isinstance(features, basestring):
features = [features]
if features is None or len(features) == 0:

@@ -151,6 +154,11 @@
"requested: %s. Do you need to install a parser library?"
% ",".join(features))
builder = builder_class()
if not (original_features == builder.NAME or
original_features in builder.ALTERNATE_NAMES):
warnings.warn(self.NO_PARSER_SPECIFIED_WARNING % dict(
parser=builder.NAME))

self.builder = builder
self.is_xml = builder.is_xml
self.builder.soup = self

@@ -178,6 +186,8 @@
# system. Just let it go.
pass
if is_file:
if isinstance(markup, unicode):
markup = markup.encode("utf8")
warnings.warn(
'"%s" looks like a filename, not markup. You should probably open this file and pass the filehandle into Beautiful Soup.' % markup)
if markup[:5] == "http:" or markup[:6] == "https:":

@@ -185,6 +195,8 @@
# Python 3 otherwise.
if ((isinstance(markup, bytes) and not b' ' in markup)
or (isinstance(markup, unicode) and not u' ' in markup)):
if isinstance(markup, unicode):
markup = markup.encode("utf8")
warnings.warn(
'"%s" looks like a URL. Beautiful Soup is not an HTTP client. You should probably use an HTTP client to get the document behind the URL, and feed that document to Beautiful Soup.' % markup)

@@ -80,6 +80,8 @@ builder_registry = TreeBuilderRegistry()
class TreeBuilder(object):
"""Turn a document into a Beautiful Soup object tree."""

NAME = "[Unknown tree builder]"
ALTERNATE_NAMES = []
features = []

is_xml = False

@@ -22,7 +22,9 @@ from bs4.element import (
class HTML5TreeBuilder(HTMLTreeBuilder):
"""Use html5lib to build a tree."""

features = ['html5lib', PERMISSIVE, HTML_5, HTML]
NAME = "html5lib"

features = [NAME, PERMISSIVE, HTML_5, HTML]

def prepare_markup(self, markup, user_specified_encoding):
# Store the user-specified encoding for use later on.

@@ -161,6 +163,12 @@ class Element(html5lib.treebuilders._base.Node):
# immediately after the parent, if it has no children.)
if self.element.contents:
most_recent_element = self.element._last_descendant(False)
elif self.element.next_element is not None:
# Something from further ahead in the parse tree is
# being inserted into this earlier element. This is
# very annoying because it means an expensive search
# for the last element in the tree.
most_recent_element = self.soup._last_descendant()
else:
most_recent_element = self.element

@@ -19,10 +19,8 @@ import warnings
# At the end of this file, we monkeypatch HTMLParser so that
# strict=True works well on Python 3.2.2.
major, minor, release = sys.version_info[:3]
CONSTRUCTOR_TAKES_STRICT = (
major > 3
or (major == 3 and minor > 2)
or (major == 3 and minor == 2 and release >= 3))
CONSTRUCTOR_TAKES_STRICT = major == 3 and minor == 2 and release >= 3
CONSTRUCTOR_TAKES_CONVERT_CHARREFS = major == 3 and minor >= 4
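The hunk above gates HTMLParser keyword arguments on `sys.version_info` tuples. As a sketch of an alternative (not what bs4 does), the same flag can be derived by feature detection, assuming Python 3 where `inspect.signature` is available:

```python
import inspect
from html.parser import HTMLParser

# Detect support for the convert_charrefs keyword (added in Python 3.4)
# by inspecting the constructor signature instead of comparing versions.
params = inspect.signature(HTMLParser.__init__).parameters
CONSTRUCTOR_TAKES_CONVERT_CHARREFS = 'convert_charrefs' in params
```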

from bs4.element import (
CData,

@@ -63,7 +61,8 @@ class BeautifulSoupHTMLParser(HTMLParser):

def handle_charref(self, name):
# XXX workaround for a bug in HTMLParser. Remove this once
# it's fixed.
# it's fixed in all supported versions.
# http://bugs.python.org/issue13633
if name.startswith('x'):
real_name = int(name.lstrip('x'), 16)
elif name.startswith('X'):
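`handle_charref` receives the reference body without the leading `&#`, so `x41`/`X41` are hexadecimal and `65` is decimal. A minimal standalone sketch of that conversion (the helper name is hypothetical):

```python
def charref_to_char(name):
    # 'x41' / 'X41' are hexadecimal references, anything else is decimal
    if name.startswith(('x', 'X')):
        return chr(int(name[1:], 16))
    return chr(int(name))
```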

@@ -113,14 +112,6 @@ class BeautifulSoupHTMLParser(HTMLParser):

def handle_pi(self, data):
self.soup.endData()
if data.endswith("?") and data.lower().startswith("xml"):
# "An XHTML processing instruction using the trailing '?'
# will cause the '?' to be included in data." - HTMLParser
# docs.
#
# Strip the question mark so we don't end up with two
# question marks.
data = data[:-1]
self.soup.handle_data(data)
self.soup.endData(ProcessingInstruction)
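The strip-the-question-mark workaround is dropped above because this merge also shortens `ProcessingInstruction.SUFFIX` from `'?>'` to `'>'` in element.py, so a processing instruction whose data keeps its trailing `?` round-trips without doubling it. A small sketch of the framing:

```python
# PREFIX/SUFFIX as set on ProcessingInstruction after this change
PREFIX, SUFFIX = '<?', '>'

def frame_pi(content):
    # HTMLParser reports an XML declaration with the trailing '?' still in
    # the data, e.g. 'xml version="1.0"?', so the suffix is just '>'.
    return PREFIX + content + SUFFIX
```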

@@ -128,11 +119,14 @@ class BeautifulSoupHTMLParser(HTMLParser):
class HTMLParserTreeBuilder(HTMLTreeBuilder):

is_xml = False
features = [HTML, STRICT, HTMLPARSER]
NAME = HTMLPARSER
features = [NAME, HTML, STRICT]

def __init__(self, *args, **kwargs):
if CONSTRUCTOR_TAKES_STRICT:
kwargs['strict'] = False
if CONSTRUCTOR_TAKES_CONVERT_CHARREFS:
kwargs['convert_charrefs'] = False
self.parser_args = (args, kwargs)

def prepare_markup(self, markup, user_specified_encoding=None,

@@ -7,7 +7,12 @@ from io import BytesIO
from StringIO import StringIO
import collections
from lxml import etree
from bs4.element import Comment, Doctype, NamespacedAttribute
from bs4.element import (
Comment,
Doctype,
NamespacedAttribute,
ProcessingInstruction,
)
from bs4.builder import (
FAST,
HTML,

@@ -25,8 +30,10 @@ class LXMLTreeBuilderForXML(TreeBuilder):

is_xml = True

NAME = "lxml-xml"

# Well, it's permissive by XML parser standards.
features = [LXML, XML, FAST, PERMISSIVE]
features = [NAME, LXML, XML, FAST, PERMISSIVE]

CHUNK_SIZE = 512

@@ -189,7 +196,9 @@ class LXMLTreeBuilderForXML(TreeBuilder):
self.nsmaps.pop()

def pi(self, target, data):
pass
self.soup.endData()
self.soup.handle_data(target + ' ' + data)
self.soup.endData(ProcessingInstruction)

def data(self, content):
self.soup.handle_data(content)

@@ -212,7 +221,10 @@
class LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML):

features = [LXML, HTML, FAST, PERMISSIVE]
NAME = LXML
ALTERNATE_NAMES = ["lxml-html"]

features = ALTERNATE_NAMES + [NAME, HTML, FAST, PERMISSIVE]
is_xml = False

def default_parser(self, encoding):

@@ -548,17 +548,17 @@
# Methods for supporting CSS selectors.

tag_name_re = re.compile('^[a-z0-9]+$')
tag_name_re = re.compile('^[a-zA-Z0-9][-.a-zA-Z0-9:_]*$')

# /^(\w+)\[(\w+)([=~\|\^\$\*]?)=?"?([^\]"]*)"?\]$/
#   \---/  \---/\-------------/    \-------/
#     |      |         |               |
#     |      |         |          The value
#     |      |    ~,|,^,$,* or =
#     |   Attribute
# /^([a-zA-Z0-9][-.a-zA-Z0-9:_]*)\[(\w+)([=~\|\^\$\*]?)=?"?([^\]"]*)"?\]$/
#   \---------------------------/  \---/\-------------/    \-------/
#                 |                  |         |               |
#                 |                  |         |          The value
#                 |                  |    ~,|,^,$,* or =
#                 |              Attribute
#                Tag
attribselect_re = re.compile(
r'^(?P<tag>\w+)?\[(?P<attribute>\w+)(?P<operator>[=~\|\^\$\*]?)' +
r'^(?P<tag>[a-zA-Z0-9][-.a-zA-Z0-9:_]*)?\[(?P<attribute>\w+)(?P<operator>[=~\|\^\$\*]?)' +
r'=?"?(?P<value>[^\]"]*)"?\]$'
)
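The widened `tag_name_re` above accepts names that start with an alphanumeric and may continue with `-`, `.`, `:` or `_`, covering custom elements and namespaced tags that the old `^[a-z0-9]+$` rejected. For instance:

```python
import re

# the new, wider pattern from the hunk above
tag_name_re = re.compile('^[a-zA-Z0-9][-.a-zA-Z0-9:_]*$')

print(bool(tag_name_re.match('custom-element')))   # hyphenated name accepted
print(bool(tag_name_re.match('svg:rect')))         # namespaced name accepted
print(bool(tag_name_re.match('-leading-dash')))    # must start alphanumeric
```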

@@ -707,7 +707,7 @@ class CData(PreformattedString):
class ProcessingInstruction(PreformattedString):

PREFIX = u'<?'
SUFFIX = u'?>'
SUFFIX = u'>'

class Comment(PreformattedString):

@@ -1203,192 +1203,206 @@ class Tag(PageElement):
_select_debug = False
def select(self, selector, _candidate_generator=None):
"""Perform a CSS selection operation on the current element."""
tokens = selector.split()

# Remove whitespace directly after the grouping operator ','
# then split into tokens.
tokens = re.sub(',[\s]*',',', selector).split()
current_context = [self]

if tokens[-1] in self._selector_combinators:
raise ValueError(
'Final combinator "%s" is missing an argument.' % tokens[-1])

if self._select_debug:
print 'Running CSS selector "%s"' % selector
for index, token in enumerate(tokens):
if self._select_debug:
print ' Considering token "%s"' % token
recursive_candidate_generator = None
tag_name = None

for index, token_group in enumerate(tokens):
new_context = []
new_context_ids = set([])

# Grouping selectors, ie: p,a
grouped_tokens = token_group.split(',')
if '' in grouped_tokens:
raise ValueError('Invalid group selection syntax: %s' % token_group)

if tokens[index-1] in self._selector_combinators:
# This token was consumed by the previous combinator. Skip it.
if self._select_debug:
print ' Token was consumed by the previous combinator.'
continue
# Each operation corresponds to a checker function, a rule
# for determining whether a candidate matches the
# selector. Candidates are generated by the active
# iterator.
checker = None

m = self.attribselect_re.match(token)
if m is not None:
# Attribute selector
tag_name, attribute, operator, value = m.groups()
checker = self._attribute_checker(operator, attribute, value)
for token in grouped_tokens:
if self._select_debug:
print ' Considering token "%s"' % token
recursive_candidate_generator = None
tag_name = None

elif '#' in token:
# ID selector
tag_name, tag_id = token.split('#', 1)
def id_matches(tag):
return tag.get('id', None) == tag_id
checker = id_matches
# Each operation corresponds to a checker function, a rule
# for determining whether a candidate matches the
# selector. Candidates are generated by the active
# iterator.
checker = None

elif '.' in token:
# Class selector
tag_name, klass = token.split('.', 1)
classes = set(klass.split('.'))
def classes_match(candidate):
return classes.issubset(candidate.get('class', []))
checker = classes_match
m = self.attribselect_re.match(token)
if m is not None:
# Attribute selector
tag_name, attribute, operator, value = m.groups()
checker = self._attribute_checker(operator, attribute, value)

elif ':' in token:
# Pseudo-class
tag_name, pseudo = token.split(':', 1)
if tag_name == '':
raise ValueError(
"A pseudo-class must be prefixed with a tag name.")
pseudo_attributes = re.match('([a-zA-Z\d-]+)\(([a-zA-Z\d]+)\)', pseudo)
found = []
if pseudo_attributes is not None:
pseudo_type, pseudo_value = pseudo_attributes.groups()
if pseudo_type == 'nth-of-type':
try:
pseudo_value = int(pseudo_value)
except:
elif '#' in token:
# ID selector
tag_name, tag_id = token.split('#', 1)
def id_matches(tag):
return tag.get('id', None) == tag_id
checker = id_matches

elif '.' in token:
# Class selector
tag_name, klass = token.split('.', 1)
classes = set(klass.split('.'))
def classes_match(candidate):
return classes.issubset(candidate.get('class', []))
checker = classes_match

elif ':' in token:
# Pseudo-class
tag_name, pseudo = token.split(':', 1)
if tag_name == '':
raise ValueError(
"A pseudo-class must be prefixed with a tag name.")
pseudo_attributes = re.match('([a-zA-Z\d-]+)\(([a-zA-Z\d]+)\)', pseudo)
found = []
if pseudo_attributes is not None:
pseudo_type, pseudo_value = pseudo_attributes.groups()
if pseudo_type == 'nth-of-type':
try:
pseudo_value = int(pseudo_value)
except:
raise NotImplementedError(
'Only numeric values are currently supported for the nth-of-type pseudo-class.')
if pseudo_value < 1:
raise ValueError(
'nth-of-type pseudo-class value must be at least 1.')
class Counter(object):
def __init__(self, destination):
self.count = 0
self.destination = destination

def nth_child_of_type(self, tag):
self.count += 1
if self.count == self.destination:
return True
if self.count > self.destination:
# Stop the generator that's sending us
# these things.
raise StopIteration()
return False
checker = Counter(pseudo_value).nth_child_of_type
else:
raise NotImplementedError(
'Only numeric values are currently supported for the nth-of-type pseudo-class.')
if pseudo_value < 1:
raise ValueError(
'nth-of-type pseudo-class value must be at least 1.')
class Counter(object):
def __init__(self, destination):
self.count = 0
self.destination = destination
'Only the following pseudo-classes are implemented: nth-of-type.')

def nth_child_of_type(self, tag):
self.count += 1
if self.count == self.destination:
return True
if self.count > self.destination:
# Stop the generator that's sending us
# these things.
raise StopIteration()
return False
checker = Counter(pseudo_value).nth_child_of_type
else:
raise NotImplementedError(
'Only the following pseudo-classes are implemented: nth-of-type.')
elif token == '*':
# Star selector -- matches everything
pass
elif token == '>':
# Run the next token as a CSS selector against the
# direct children of each tag in the current context.
recursive_candidate_generator = lambda tag: tag.children
elif token == '~':
# Run the next token as a CSS selector against the
# siblings of each tag in the current context.
recursive_candidate_generator = lambda tag: tag.next_siblings
elif token == '+':
# For each tag in the current context, run the next
# token as a CSS selector against the tag's next
# sibling that's a tag.
def next_tag_sibling(tag):
yield tag.find_next_sibling(True)
recursive_candidate_generator = next_tag_sibling

elif token == '*':
# Star selector -- matches everything
pass
elif token == '>':
# Run the next token as a CSS selector against the
# direct children of each tag in the current context.
recursive_candidate_generator = lambda tag: tag.children
elif token == '~':
|
||||
# Run the next token as a CSS selector against the
|
||||
# siblings of each tag in the current context.
|
||||
recursive_candidate_generator = lambda tag: tag.next_siblings
|
||||
elif token == '+':
|
||||
# For each tag in the current context, run the next
|
||||
# token as a CSS selector against the tag's next
|
||||
# sibling that's a tag.
|
||||
def next_tag_sibling(tag):
|
||||
yield tag.find_next_sibling(True)
|
||||
recursive_candidate_generator = next_tag_sibling
|
||||
|
||||
elif self.tag_name_re.match(token):
|
||||
# Just a tag name.
|
||||
tag_name = token
|
||||
else:
|
||||
raise ValueError(
|
||||
'Unsupported or invalid CSS selector: "%s"' % token)
|
||||
|
||||
if recursive_candidate_generator:
|
||||
# This happens when the selector looks like "> foo".
|
||||
#
|
||||
# The generator calls select() recursively on every
|
||||
# member of the current context, passing in a different
|
||||
# candidate generator and a different selector.
|
||||
#
|
||||
# In the case of "> foo", the candidate generator is
|
||||
# one that yields a tag's direct children (">"), and
|
||||
# the selector is "foo".
|
||||
next_token = tokens[index+1]
|
||||
def recursive_select(tag):
|
||||
if self._select_debug:
|
||||
print ' Calling select("%s") recursively on %s %s' % (next_token, tag.name, tag.attrs)
|
||||
print '-' * 40
|
||||
for i in tag.select(next_token, recursive_candidate_generator):
|
||||
if self._select_debug:
|
||||
print '(Recursive select picked up candidate %s %s)' % (i.name, i.attrs)
|
||||
yield i
|
||||
if self._select_debug:
|
||||
print '-' * 40
|
||||
_use_candidate_generator = recursive_select
|
||||
elif _candidate_generator is None:
|
||||
# By default, a tag's candidates are all of its
|
||||
# children. If tag_name is defined, only yield tags
|
||||
# with that name.
|
||||
if self._select_debug:
|
||||
if tag_name:
|
||||
check = "[any]"
|
||||
else:
|
||||
check = tag_name
|
||||
print ' Default candidate generator, tag name="%s"' % check
|
||||
if self._select_debug:
|
||||
# This is redundant with later code, but it stops
|
||||
# a bunch of bogus tags from cluttering up the
|
||||
# debug log.
|
||||
def default_candidate_generator(tag):
|
||||
for child in tag.descendants:
|
||||
if not isinstance(child, Tag):
|
||||
continue
|
||||
if tag_name and not child.name == tag_name:
|
||||
continue
|
||||
yield child
|
||||
_use_candidate_generator = default_candidate_generator
|
||||
elif self.tag_name_re.match(token):
|
||||
# Just a tag name.
|
||||
tag_name = token
|
||||
else:
|
||||
_use_candidate_generator = lambda tag: tag.descendants
|
||||
else:
|
||||
_use_candidate_generator = _candidate_generator
|
||||
|
||||
new_context = []
|
||||
new_context_ids = set([])
|
||||
for tag in current_context:
|
||||
if self._select_debug:
|
||||
print " Running candidate generator on %s %s" % (
|
||||
tag.name, repr(tag.attrs))
|
||||
for candidate in _use_candidate_generator(tag):
|
||||
if not isinstance(candidate, Tag):
|
||||
continue
|
||||
if tag_name and candidate.name != tag_name:
|
||||
continue
|
||||
if checker is not None:
|
||||
try:
|
||||
result = checker(candidate)
|
||||
except StopIteration:
|
||||
# The checker has decided we should no longer
|
||||
# run the generator.
|
||||
break
|
||||
if checker is None or result:
|
||||
raise ValueError(
|
||||
'Unsupported or invalid CSS selector: "%s"' % token)
|
||||
if recursive_candidate_generator:
|
||||
# This happens when the selector looks like "> foo".
|
||||
#
|
||||
# The generator calls select() recursively on every
|
||||
# member of the current context, passing in a different
|
||||
# candidate generator and a different selector.
|
||||
#
|
||||
# In the case of "> foo", the candidate generator is
|
||||
# one that yields a tag's direct children (">"), and
|
||||
# the selector is "foo".
|
||||
next_token = tokens[index+1]
|
||||
def recursive_select(tag):
|
||||
if self._select_debug:
|
||||
print " SUCCESS %s %s" % (candidate.name, repr(candidate.attrs))
|
||||
if id(candidate) not in new_context_ids:
|
||||
# If a tag matches a selector more than once,
|
||||
# don't include it in the context more than once.
|
||||
new_context.append(candidate)
|
||||
new_context_ids.add(id(candidate))
|
||||
elif self._select_debug:
|
||||
print " FAILURE %s %s" % (candidate.name, repr(candidate.attrs))
|
||||
print ' Calling select("%s") recursively on %s %s' % (next_token, tag.name, tag.attrs)
|
||||
print '-' * 40
|
||||
for i in tag.select(next_token, recursive_candidate_generator):
|
||||
if self._select_debug:
|
||||
print '(Recursive select picked up candidate %s %s)' % (i.name, i.attrs)
|
||||
yield i
|
||||
if self._select_debug:
|
||||
print '-' * 40
|
||||
_use_candidate_generator = recursive_select
|
||||
elif _candidate_generator is None:
|
||||
# By default, a tag's candidates are all of its
|
||||
# children. If tag_name is defined, only yield tags
|
||||
# with that name.
|
||||
if self._select_debug:
|
||||
if tag_name:
|
||||
check = "[any]"
|
||||
else:
|
||||
check = tag_name
|
||||
print ' Default candidate generator, tag name="%s"' % check
|
||||
if self._select_debug:
|
||||
# This is redundant with later code, but it stops
|
||||
# a bunch of bogus tags from cluttering up the
|
||||
# debug log.
|
||||
def default_candidate_generator(tag):
|
||||
for child in tag.descendants:
|
||||
if not isinstance(child, Tag):
|
||||
continue
|
||||
if tag_name and not child.name == tag_name:
|
||||
continue
|
||||
yield child
|
||||
_use_candidate_generator = default_candidate_generator
|
||||
else:
|
||||
_use_candidate_generator = lambda tag: tag.descendants
|
||||
else:
|
||||
_use_candidate_generator = _candidate_generator
|
||||
|
||||
for tag in current_context:
|
||||
if self._select_debug:
|
||||
print " Running candidate generator on %s %s" % (
|
||||
tag.name, repr(tag.attrs))
|
||||
for candidate in _use_candidate_generator(tag):
|
||||
if not isinstance(candidate, Tag):
|
||||
continue
|
||||
if tag_name and candidate.name != tag_name:
|
||||
continue
|
||||
if checker is not None:
|
||||
try:
|
||||
result = checker(candidate)
|
||||
except StopIteration:
|
||||
# The checker has decided we should no longer
|
||||
# run the generator.
|
||||
break
|
||||
if checker is None or result:
|
||||
if self._select_debug:
|
||||
print " SUCCESS %s %s" % (candidate.name, repr(candidate.attrs))
|
||||
if id(candidate) not in new_context_ids:
|
||||
# If a tag matches a selector more than once,
|
||||
# don't include it in the context more than once.
|
||||
new_context.append(candidate)
|
||||
new_context_ids.add(id(candidate))
|
||||
elif self._select_debug:
|
||||
print " FAILURE %s %s" % (candidate.name, repr(candidate.attrs))
|
||||
|
||||
|
||||
current_context = new_context
|
||||
|
||||
|
|
|
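The nth-of-type branch above installs a stateful checker whose `StopIteration` is caught by the candidate loop further down, cutting iteration short once the target position has passed. A minimal, self-contained Python 3 sketch of that checker/loop contract (the `select_nth` driver and the string candidates are illustrative, not part of Beautiful Soup):

```python
class Counter:
    """Stateful checker: returns True only for the nth candidate seen."""

    def __init__(self, destination):
        self.count = 0
        self.destination = destination

    def nth_of_type(self, tag):
        self.count += 1
        if self.count == self.destination:
            return True
        if self.count > self.destination:
            # Signal the caller to stop iterating candidates early.
            raise StopIteration()
        return False


def select_nth(candidates, n):
    """Collect candidates accepted by the checker, stopping early."""
    checker = Counter(n).nth_of_type
    matched = []
    for candidate in candidates:
        try:
            result = checker(candidate)
        except StopIteration:
            # The checker has decided we should no longer run the loop.
            break
        if result:
            matched.append(candidate)
    return matched


print(select_nth(['p1', 'p2', 'p3', 'p4'], 2))  # ['p2']
```

The early `StopIteration` matters when the candidate generator walks a large subtree: once the nth match has been passed, no further descendants are visited.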
@@ -1,592 +0,0 @@
"""Helper classes for tests."""
|
||||
|
||||
import copy
|
||||
import functools
|
||||
import unittest
|
||||
from unittest import TestCase
|
||||
from bs4 import BeautifulSoup
|
||||
from bs4.element import (
|
||||
CharsetMetaAttributeValue,
|
||||
Comment,
|
||||
ContentMetaAttributeValue,
|
||||
Doctype,
|
||||
SoupStrainer,
|
||||
)
|
||||
|
||||
from bs4.builder import HTMLParserTreeBuilder
|
||||
default_builder = HTMLParserTreeBuilder
|
||||
|
||||
|
||||
class SoupTest(unittest.TestCase):
|
||||
|
||||
@property
|
||||
def default_builder(self):
|
||||
return default_builder()
|
||||
|
||||
def soup(self, markup, **kwargs):
|
||||
"""Build a Beautiful Soup object from markup."""
|
||||
builder = kwargs.pop('builder', self.default_builder)
|
||||
return BeautifulSoup(markup, builder=builder, **kwargs)
|
||||
|
||||
def document_for(self, markup):
|
||||
"""Turn an HTML fragment into a document.
|
||||
|
||||
The details depend on the builder.
|
||||
"""
|
||||
return self.default_builder.test_fragment_to_document(markup)
|
||||
|
||||
def assertSoupEquals(self, to_parse, compare_parsed_to=None):
|
||||
builder = self.default_builder
|
||||
obj = BeautifulSoup(to_parse, builder=builder)
|
||||
if compare_parsed_to is None:
|
||||
compare_parsed_to = to_parse
|
||||
|
||||
self.assertEqual(obj.decode(), self.document_for(compare_parsed_to))
|
||||
|
||||
|
||||
class HTMLTreeBuilderSmokeTest(object):

    """A basic test of a treebuilder's competence.

    Any HTML treebuilder, present or future, should be able to pass
    these tests. With invalid markup, there's room for interpretation,
    and different parsers can handle it differently. But with the
    markup in these tests, there's not much room for interpretation.
    """

    def assertDoctypeHandled(self, doctype_fragment):
        """Assert that a given doctype string is handled correctly."""
        doctype_str, soup = self._document_with_doctype(doctype_fragment)

        # Make sure a Doctype object was created.
        doctype = soup.contents[0]
        self.assertEqual(doctype.__class__, Doctype)
        self.assertEqual(doctype, doctype_fragment)
        self.assertEqual(str(soup)[:len(doctype_str)], doctype_str)

        # Make sure that the doctype was correctly associated with the
        # parse tree and that the rest of the document parsed.
        self.assertEqual(soup.p.contents[0], 'foo')

    def _document_with_doctype(self, doctype_fragment):
        """Generate and parse a document with the given doctype."""
        doctype = '<!DOCTYPE %s>' % doctype_fragment
        markup = doctype + '\n<p>foo</p>'
        soup = self.soup(markup)
        return doctype, soup

    def test_normal_doctypes(self):
        """Make sure normal, everyday HTML doctypes are handled correctly."""
        self.assertDoctypeHandled("html")
        self.assertDoctypeHandled(
            'html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"')

    def test_empty_doctype(self):
        soup = self.soup("<!DOCTYPE>")
        doctype = soup.contents[0]
        self.assertEqual("", doctype.strip())

    def test_public_doctype_with_url(self):
        doctype = 'html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"'
        self.assertDoctypeHandled(doctype)

    def test_system_doctype(self):
        self.assertDoctypeHandled('foo SYSTEM "http://www.example.com/"')

    def test_namespaced_system_doctype(self):
        # We can handle a namespaced doctype with a system ID.
        self.assertDoctypeHandled('xsl:stylesheet SYSTEM "htmlent.dtd"')

    def test_namespaced_public_doctype(self):
        # Test a namespaced doctype with a public id.
        self.assertDoctypeHandled('xsl:stylesheet PUBLIC "htmlent.dtd"')

    def test_real_xhtml_document(self):
        """A real XHTML document should come out more or less the same as it went in."""
        markup = b"""<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Hello.</title></head>
<body>Goodbye.</body>
</html>"""
        soup = self.soup(markup)
        self.assertEqual(
            soup.encode("utf-8").replace(b"\n", b""),
            markup.replace(b"\n", b""))

    def test_deepcopy(self):
        """Make sure you can copy the tree builder.

        This is important because the builder is part of a
        BeautifulSoup object, and we want to be able to copy that.
        """
        copy.deepcopy(self.default_builder)

    def test_p_tag_is_never_empty_element(self):
        """A <p> tag is never designated as an empty-element tag.

        Even if the markup shows it as an empty-element tag, it
        shouldn't be presented that way.
        """
        soup = self.soup("<p/>")
        self.assertFalse(soup.p.is_empty_element)
        self.assertEqual(str(soup.p), "<p></p>")

    def test_unclosed_tags_get_closed(self):
        """A tag that's not closed by the end of the document should be closed.

        This applies to all tags except empty-element tags.
        """
        self.assertSoupEquals("<p>", "<p></p>")
        self.assertSoupEquals("<b>", "<b></b>")

        self.assertSoupEquals("<br>", "<br/>")

    def test_br_is_always_empty_element_tag(self):
        """A <br> tag is designated as an empty-element tag.

        Some parsers treat <br></br> as one <br/> tag, some parsers as
        two tags, but it should always be an empty-element tag.
        """
        soup = self.soup("<br></br>")
        self.assertTrue(soup.br.is_empty_element)
        self.assertEqual(str(soup.br), "<br/>")

    def test_nested_formatting_elements(self):
        self.assertSoupEquals("<em><em></em></em>")

    def test_comment(self):
        # Comments are represented as Comment objects.
        markup = "<p>foo<!--foobar-->baz</p>"
        self.assertSoupEquals(markup)

        soup = self.soup(markup)
        comment = soup.find(text="foobar")
        self.assertEqual(comment.__class__, Comment)

        # The comment is properly integrated into the tree.
        foo = soup.find(text="foo")
        self.assertEqual(comment, foo.next_element)
        baz = soup.find(text="baz")
        self.assertEqual(comment, baz.previous_element)

    def test_preserved_whitespace_in_pre_and_textarea(self):
        """Whitespace must be preserved in <pre> and <textarea> tags."""
        self.assertSoupEquals("<pre> </pre>")
        self.assertSoupEquals("<textarea> woo </textarea>")

    def test_nested_inline_elements(self):
        """Inline elements can be nested indefinitely."""
        b_tag = "<b>Inside a B tag</b>"
        self.assertSoupEquals(b_tag)

        nested_b_tag = "<p>A <i>nested <b>tag</b></i></p>"
        self.assertSoupEquals(nested_b_tag)

        double_nested_b_tag = "<p>A <a>doubly <i>nested <b>tag</b></i></a></p>"
        self.assertSoupEquals(double_nested_b_tag)

    def test_nested_block_level_elements(self):
        """Block elements can be nested."""
        soup = self.soup('<blockquote><p><b>Foo</b></p></blockquote>')
        blockquote = soup.blockquote
        self.assertEqual(blockquote.p.b.string, 'Foo')
        self.assertEqual(blockquote.b.string, 'Foo')

    def test_correctly_nested_tables(self):
        """One table can go inside another one."""
        markup = ('<table id="1">'
                  '<tr>'
                  "<td>Here's another table:"
                  '<table id="2">'
                  '<tr><td>foo</td></tr>'
                  '</table></td>')

        self.assertSoupEquals(
            markup,
            '<table id="1"><tr><td>Here\'s another table:'
            '<table id="2"><tr><td>foo</td></tr></table>'
            '</td></tr></table>')

        self.assertSoupEquals(
            "<table><thead><tr><td>Foo</td></tr></thead>"
            "<tbody><tr><td>Bar</td></tr></tbody>"
            "<tfoot><tr><td>Baz</td></tr></tfoot></table>")

    def test_deeply_nested_multivalued_attribute(self):
        # html5lib can set the attributes of the same tag many times
        # as it rearranges the tree. This has caused problems with
        # multivalued attributes.
        markup = '<table><div><div class="css"></div></div></table>'
        soup = self.soup(markup)
        self.assertEqual(["css"], soup.div.div['class'])

    def test_angle_brackets_in_attribute_values_are_escaped(self):
        self.assertSoupEquals('<a b="<a>"></a>', '<a b="&lt;a&gt;"></a>')

    def test_entities_in_attributes_converted_to_unicode(self):
        expect = u'<p id="pi\N{LATIN SMALL LETTER N WITH TILDE}ata"></p>'
        self.assertSoupEquals('<p id="pi&ntilde;ata"></p>', expect)
        self.assertSoupEquals('<p id="pi&#241;ata"></p>', expect)
        self.assertSoupEquals('<p id="pi&#xf1;ata"></p>', expect)
        self.assertSoupEquals('<p id="pi&#Xf1;ata"></p>', expect)

    def test_entities_in_text_converted_to_unicode(self):
        expect = u'<p>pi\N{LATIN SMALL LETTER N WITH TILDE}ata</p>'
        self.assertSoupEquals("<p>pi&ntilde;ata</p>", expect)
        self.assertSoupEquals("<p>pi&#241;ata</p>", expect)
        self.assertSoupEquals("<p>pi&#xf1;ata</p>", expect)
        self.assertSoupEquals("<p>pi&#Xf1;ata</p>", expect)

    def test_quot_entity_converted_to_quotation_mark(self):
        self.assertSoupEquals("<p>I said &quot;good day!&quot;</p>",
                              '<p>I said "good day!"</p>')

    def test_out_of_range_entity(self):
        expect = u"\N{REPLACEMENT CHARACTER}"
        self.assertSoupEquals("&#10000000000000;", expect)
        self.assertSoupEquals("&#x10000000000000;", expect)
        self.assertSoupEquals("&#1000000000;", expect)

    def test_multipart_strings(self):
        "Mostly to prevent a recurrence of a bug in the html5lib treebuilder."
        soup = self.soup("<html><h2>\nfoo</h2><p></p></html>")
        self.assertEqual("p", soup.h2.string.next_element.name)
        self.assertEqual("p", soup.p.name)

    def test_basic_namespaces(self):
        """Parsers don't need to *understand* namespaces, but at the
        very least they should not choke on namespaces or lose
        data."""

        markup = b'<html xmlns="http://www.w3.org/1999/xhtml" xmlns:mathml="http://www.w3.org/1998/Math/MathML" xmlns:svg="http://www.w3.org/2000/svg"><head></head><body><mathml:msqrt>4</mathml:msqrt><b svg:fill="red"></b></body></html>'
        soup = self.soup(markup)
        self.assertEqual(markup, soup.encode())
        html = soup.html
        self.assertEqual('http://www.w3.org/1999/xhtml', soup.html['xmlns'])
        self.assertEqual(
            'http://www.w3.org/1998/Math/MathML', soup.html['xmlns:mathml'])
        self.assertEqual(
            'http://www.w3.org/2000/svg', soup.html['xmlns:svg'])

    def test_multivalued_attribute_value_becomes_list(self):
        markup = b'<a class="foo bar">'
        soup = self.soup(markup)
        self.assertEqual(['foo', 'bar'], soup.a['class'])

    #
    # Generally speaking, tests below this point are more tests of
    # Beautiful Soup than tests of the tree builders. But parsers are
    # weird, so we run these tests separately for every tree builder
    # to detect any differences between them.
    #

    def test_can_parse_unicode_document(self):
        # A seemingly innocuous document... but it's in Unicode! And
        # it contains characters that can't be represented in the
        # encoding found in the declaration! The horror!
        markup = u'<html><head><meta encoding="euc-jp"></head><body>Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!</body>'
        soup = self.soup(markup)
        self.assertEqual(u'Sacr\xe9 bleu!', soup.body.string)

    def test_soupstrainer(self):
        """Parsers should be able to work with SoupStrainers."""
        strainer = SoupStrainer("b")
        soup = self.soup("A <b>bold</b> <meta/> <i>statement</i>",
                         parse_only=strainer)
        self.assertEqual(soup.decode(), "<b>bold</b>")

    def test_single_quote_attribute_values_become_double_quotes(self):
        self.assertSoupEquals("<foo attr='bar'></foo>",
                              '<foo attr="bar"></foo>')

    def test_attribute_values_with_nested_quotes_are_left_alone(self):
        text = """<foo attr='bar "brawls" happen'>a</foo>"""
        self.assertSoupEquals(text)

    def test_attribute_values_with_double_nested_quotes_get_quoted(self):
        text = """<foo attr='bar "brawls" happen'>a</foo>"""
        soup = self.soup(text)
        soup.foo['attr'] = 'Brawls happen at "Bob\'s Bar"'
        self.assertSoupEquals(
            soup.foo.decode(),
            """<foo attr="Brawls happen at &quot;Bob\'s Bar&quot;">a</foo>""")

    def test_ampersand_in_attribute_value_gets_escaped(self):
        self.assertSoupEquals('<this is="really messed up & stuff"></this>',
                              '<this is="really messed up &amp; stuff"></this>')

        self.assertSoupEquals(
            '<a href="http://example.org?a=1&b=2;3">foo</a>',
            '<a href="http://example.org?a=1&amp;b=2;3">foo</a>')

    def test_escaped_ampersand_in_attribute_value_is_left_alone(self):
        self.assertSoupEquals('<a href="http://example.org?a=1&amp;b=2;3"></a>')

    def test_entities_in_strings_converted_during_parsing(self):
        # Both XML and HTML entities are converted to Unicode characters
        # during parsing.
        text = "<p>&lt;&lt;sacr&eacute; bleu!&gt;&gt;</p>"
        expected = u"<p>&lt;&lt;sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!&gt;&gt;</p>"
        self.assertSoupEquals(text, expected)

    def test_smart_quotes_converted_on_the_way_in(self):
        # Microsoft smart quotes are converted to Unicode characters during
        # parsing.
        quote = b"<p>\x91Foo\x92</p>"
        soup = self.soup(quote)
        self.assertEqual(
            soup.p.string,
            u"\N{LEFT SINGLE QUOTATION MARK}Foo\N{RIGHT SINGLE QUOTATION MARK}")

    def test_non_breaking_spaces_converted_on_the_way_in(self):
        soup = self.soup("<a>&nbsp;&nbsp;</a>")
        self.assertEqual(soup.a.string, u"\N{NO-BREAK SPACE}" * 2)

    def test_entities_converted_on_the_way_out(self):
        text = "<p>&lt;&lt;sacr&eacute; bleu!&gt;&gt;</p>"
        expected = u"<p>&lt;&lt;sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!&gt;&gt;</p>".encode("utf-8")
        soup = self.soup(text)
        self.assertEqual(soup.p.encode("utf-8"), expected)

    def test_real_iso_latin_document(self):
        # Smoke test of interrelated functionality, using an
        # easy-to-understand document.

        # Here it is in Unicode. Note that it claims to be in ISO-Latin-1.
        unicode_html = u'<html><head><meta content="text/html; charset=ISO-Latin-1" http-equiv="Content-type"/></head><body><p>Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!</p></body></html>'

        # That's because we're going to encode it into ISO-Latin-1, and use
        # that to test.
        iso_latin_html = unicode_html.encode("iso-8859-1")

        # Parse the ISO-Latin-1 HTML.
        soup = self.soup(iso_latin_html)
        # Encode it to UTF-8.
        result = soup.encode("utf-8")

        # What do we expect the result to look like? Well, it would
        # look like unicode_html, except that the META tag would say
        # UTF-8 instead of ISO-Latin-1.
        expected = unicode_html.replace("ISO-Latin-1", "utf-8")

        # And, of course, it would be in UTF-8, not Unicode.
        expected = expected.encode("utf-8")

        # Ta-da!
        self.assertEqual(result, expected)

    def test_real_shift_jis_document(self):
        # Smoke test to make sure the parser can handle a document in
        # Shift-JIS encoding, without choking.
        shift_jis_html = (
            b'<html><head></head><body><pre>'
            b'\x82\xb1\x82\xea\x82\xcdShift-JIS\x82\xc5\x83R\x81[\x83f'
            b'\x83B\x83\x93\x83O\x82\xb3\x82\xea\x82\xbd\x93\xfa\x96{\x8c'
            b'\xea\x82\xcc\x83t\x83@\x83C\x83\x8b\x82\xc5\x82\xb7\x81B'
            b'</pre></body></html>')
        unicode_html = shift_jis_html.decode("shift-jis")
        soup = self.soup(unicode_html)

        # Make sure the parse tree is correctly encoded to various
        # encodings.
        self.assertEqual(soup.encode("utf-8"), unicode_html.encode("utf-8"))
        self.assertEqual(soup.encode("euc_jp"), unicode_html.encode("euc_jp"))

    def test_real_hebrew_document(self):
        # A real-world test to make sure we can convert ISO-8859-8 (a
        # Hebrew encoding) to UTF-8.
        hebrew_document = b'<html><head><title>Hebrew (ISO 8859-8) in Visual Directionality</title></head><body><h1>Hebrew (ISO 8859-8) in Visual Directionality</h1>\xed\xe5\xec\xf9</body></html>'
        soup = self.soup(
            hebrew_document, from_encoding="iso8859-8")
        self.assertEqual(soup.original_encoding, 'iso8859-8')
        self.assertEqual(
            soup.encode('utf-8'),
            hebrew_document.decode("iso8859-8").encode("utf-8"))

    def test_meta_tag_reflects_current_encoding(self):
        # Here's the <meta> tag saying that a document is
        # encoded in Shift-JIS.
        meta_tag = ('<meta content="text/html; charset=x-sjis" '
                    'http-equiv="Content-type"/>')

        # Here's a document incorporating that meta tag.
        shift_jis_html = (
            '<html><head>\n%s\n'
            '<meta http-equiv="Content-language" content="ja"/>'
            '</head><body>Shift-JIS markup goes here.') % meta_tag
        soup = self.soup(shift_jis_html)

        # Parse the document, and the charset is seemingly unaffected.
        parsed_meta = soup.find('meta', {'http-equiv': 'Content-type'})
        content = parsed_meta['content']
        self.assertEqual('text/html; charset=x-sjis', content)

        # But that value is actually a ContentMetaAttributeValue object.
        self.assertTrue(isinstance(content, ContentMetaAttributeValue))

        # And it will take on a value that reflects its current
        # encoding.
        self.assertEqual('text/html; charset=utf8', content.encode("utf8"))

        # For the rest of the story, see TestSubstitutions in
        # test_tree.py.

    def test_html5_style_meta_tag_reflects_current_encoding(self):
        # Here's the <meta> tag saying that a document is
        # encoded in Shift-JIS.
        meta_tag = ('<meta id="encoding" charset="x-sjis" />')

        # Here's a document incorporating that meta tag.
        shift_jis_html = (
            '<html><head>\n%s\n'
            '<meta http-equiv="Content-language" content="ja"/>'
            '</head><body>Shift-JIS markup goes here.') % meta_tag
        soup = self.soup(shift_jis_html)

        # Parse the document, and the charset is seemingly unaffected.
        parsed_meta = soup.find('meta', id="encoding")
        charset = parsed_meta['charset']
        self.assertEqual('x-sjis', charset)

        # But that value is actually a CharsetMetaAttributeValue object.
        self.assertTrue(isinstance(charset, CharsetMetaAttributeValue))

        # And it will take on a value that reflects its current
        # encoding.
        self.assertEqual('utf8', charset.encode("utf8"))

    def test_tag_with_no_attributes_can_have_attributes_added(self):
        data = self.soup("<a>text</a>")
        data.a['foo'] = 'bar'
        self.assertEqual('<a foo="bar">text</a>', data.a.decode())

class XMLTreeBuilderSmokeTest(object):

    def test_docstring_generated(self):
        soup = self.soup("<root/>")
        self.assertEqual(
            soup.encode(), b'<?xml version="1.0" encoding="utf-8"?>\n<root/>')

    def test_real_xhtml_document(self):
        """A real XHTML document should come out *exactly* the same as it went in."""
        markup = b"""<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Hello.</title></head>
<body>Goodbye.</body>
</html>"""
        soup = self.soup(markup)
        self.assertEqual(
            soup.encode("utf-8"), markup)

    def test_formatter_processes_script_tag_for_xml_documents(self):
        doc = """
<script type="text/javascript">
</script>
"""
        soup = BeautifulSoup(doc, "xml")
        # lxml would have stripped this while parsing, but we can add
        # it later.
        soup.script.string = 'console.log("< < hey > > ");'
        encoded = soup.encode()
        self.assertTrue(b"&lt; &lt; hey &gt; &gt;" in encoded)

    def test_can_parse_unicode_document(self):
        markup = u'<?xml version="1.0" encoding="euc-jp"><root>Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!</root>'
        soup = self.soup(markup)
        self.assertEqual(u'Sacr\xe9 bleu!', soup.root.string)

    def test_popping_namespaced_tag(self):
        markup = '<rss xmlns:dc="foo"><dc:creator>b</dc:creator><dc:date>2012-07-02T20:33:42Z</dc:date><dc:rights>c</dc:rights><image>d</image></rss>'
        soup = self.soup(markup)
        self.assertEqual(
            unicode(soup.rss), markup)

    def test_docstring_includes_correct_encoding(self):
        soup = self.soup("<root/>")
        self.assertEqual(
            soup.encode("latin1"),
            b'<?xml version="1.0" encoding="latin1"?>\n<root/>')

    def test_large_xml_document(self):
        """A large XML document should come out the same as it went in."""
        markup = (b'<?xml version="1.0" encoding="utf-8"?>\n<root>'
                  + b'0' * (2**12)
                  + b'</root>')
        soup = self.soup(markup)
        self.assertEqual(soup.encode("utf-8"), markup)

    def test_tags_are_empty_element_if_and_only_if_they_are_empty(self):
        self.assertSoupEquals("<p>", "<p/>")
        self.assertSoupEquals("<p>foo</p>")

    def test_namespaces_are_preserved(self):
        markup = '<root xmlns:a="http://example.com/" xmlns:b="http://example.net/"><a:foo>This tag is in the a namespace</a:foo><b:foo>This tag is in the b namespace</b:foo></root>'
        soup = self.soup(markup)
        root = soup.root
        self.assertEqual("http://example.com/", root['xmlns:a'])
        self.assertEqual("http://example.net/", root['xmlns:b'])

    def test_closing_namespaced_tag(self):
        markup = '<p xmlns:dc="http://purl.org/dc/elements/1.1/"><dc:date>20010504</dc:date></p>'
        soup = self.soup(markup)
        self.assertEqual(unicode(soup.p), markup)

    def test_namespaced_attributes(self):
        markup = '<foo xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><bar xsi:schemaLocation="http://www.example.com"/></foo>'
        soup = self.soup(markup)
        self.assertEqual(unicode(soup.foo), markup)

    def test_namespaced_attributes_xml_namespace(self):
        markup = '<foo xml:lang="fr">bar</foo>'
        soup = self.soup(markup)
        self.assertEqual(unicode(soup.foo), markup)

class HTML5TreeBuilderSmokeTest(HTMLTreeBuilderSmokeTest):
|
||||
"""Smoke test for a tree builder that supports HTML5."""
|
||||
|
||||
def test_real_xhtml_document(self):
|
||||
# Since XHTML is not HTML5, HTML5 parsers are not tested to handle
|
||||
# XHTML documents in any particular way.
|
||||
pass
|
||||
|
||||
def test_html_tags_have_namespace(self):
|
||||
markup = "<a>"
|
||||
soup = self.soup(markup)
|
||||
self.assertEqual("http://www.w3.org/1999/xhtml", soup.a.namespace)
|
||||
|
||||
def test_svg_tags_have_namespace(self):
|
||||
markup = '<svg><circle/></svg>'
|
||||
soup = self.soup(markup)
|
||||
namespace = "http://www.w3.org/2000/svg"
|
||||
self.assertEqual(namespace, soup.svg.namespace)
|
||||
self.assertEqual(namespace, soup.circle.namespace)
|
||||
|
||||
|
||||
def test_mathml_tags_have_namespace(self):
|
||||
markup = '<math><msqrt>5</msqrt></math>'
|
||||
soup = self.soup(markup)
|
||||
namespace = 'http://www.w3.org/1998/Math/MathML'
|
||||
self.assertEqual(namespace, soup.math.namespace)
|
||||
self.assertEqual(namespace, soup.msqrt.namespace)
|
||||
|
||||
def test_xml_declaration_becomes_comment(self):
|
||||
markup = '<?xml version="1.0" encoding="utf-8"?><html></html>'
|
||||
soup = self.soup(markup)
|
||||
self.assertTrue(isinstance(soup.contents[0], Comment))
|
||||
self.assertEqual(soup.contents[0], '?xml version="1.0" encoding="utf-8"?')
|
||||
self.assertEqual("html", soup.contents[0].next_element.name)
|
||||
|
||||
def skipIf(condition, reason):
|
||||
def nothing(test, *args, **kwargs):
|
||||
return None
|
||||
|
||||
def decorator(test_item):
|
||||
if condition:
|
||||
return nothing
|
||||
else:
|
||||
return test_item
|
||||
|
||||
return decorator
|
@@ -1 +0,0 @@
"The beautifulsoup tests."
@@ -1,141 +0,0 @@
"""Tests of the builder registry."""

import unittest

from bs4 import BeautifulSoup
from bs4.builder import (
    builder_registry as registry,
    HTMLParserTreeBuilder,
    TreeBuilderRegistry,
)

try:
    from bs4.builder import HTML5TreeBuilder
    HTML5LIB_PRESENT = True
except ImportError:
    HTML5LIB_PRESENT = False

try:
    from bs4.builder import (
        LXMLTreeBuilderForXML,
        LXMLTreeBuilder,
        )
    LXML_PRESENT = True
except ImportError:
    LXML_PRESENT = False


class BuiltInRegistryTest(unittest.TestCase):
    """Test the built-in registry with the default builders registered."""

    def test_combination(self):
        if LXML_PRESENT:
            self.assertEqual(registry.lookup('fast', 'html'),
                             LXMLTreeBuilder)

        if LXML_PRESENT:
            self.assertEqual(registry.lookup('permissive', 'xml'),
                             LXMLTreeBuilderForXML)
        self.assertEqual(registry.lookup('strict', 'html'),
                         HTMLParserTreeBuilder)
        if HTML5LIB_PRESENT:
            self.assertEqual(registry.lookup('html5lib', 'html'),
                             HTML5TreeBuilder)

    def test_lookup_by_markup_type(self):
        if LXML_PRESENT:
            self.assertEqual(registry.lookup('html'), LXMLTreeBuilder)
            self.assertEqual(registry.lookup('xml'), LXMLTreeBuilderForXML)
        else:
            self.assertEqual(registry.lookup('xml'), None)
            if HTML5LIB_PRESENT:
                self.assertEqual(registry.lookup('html'), HTML5TreeBuilder)
            else:
                self.assertEqual(registry.lookup('html'), HTMLParserTreeBuilder)

    def test_named_library(self):
        if LXML_PRESENT:
            self.assertEqual(registry.lookup('lxml', 'xml'),
                             LXMLTreeBuilderForXML)
            self.assertEqual(registry.lookup('lxml', 'html'),
                             LXMLTreeBuilder)
        if HTML5LIB_PRESENT:
            self.assertEqual(registry.lookup('html5lib'),
                             HTML5TreeBuilder)

        self.assertEqual(registry.lookup('html.parser'),
                         HTMLParserTreeBuilder)

    def test_beautifulsoup_constructor_does_lookup(self):
        # You can pass in a string.
        BeautifulSoup("", features="html")
        # Or a list of strings.
        BeautifulSoup("", features=["html", "fast"])

        # You'll get an exception if BS can't find an appropriate
        # builder.
        self.assertRaises(ValueError, BeautifulSoup,
                          "", features="no-such-feature")


class RegistryTest(unittest.TestCase):
    """Test the TreeBuilderRegistry class in general."""

    def setUp(self):
        self.registry = TreeBuilderRegistry()

    def builder_for_features(self, *feature_list):
        cls = type('Builder_' + '_'.join(feature_list),
                   (object,), {'features': feature_list})

        self.registry.register(cls)
        return cls

    def test_register_with_no_features(self):
        builder = self.builder_for_features()

        # Since the builder advertises no features, you can't find it
        # by looking up features.
        self.assertEqual(self.registry.lookup('foo'), None)

        # But you can find it by doing a lookup with no features, if
        # this happens to be the only registered builder.
        self.assertEqual(self.registry.lookup(), builder)

    def test_register_with_features_makes_lookup_succeed(self):
        builder = self.builder_for_features('foo', 'bar')
        self.assertEqual(self.registry.lookup('foo'), builder)
        self.assertEqual(self.registry.lookup('bar'), builder)

    def test_lookup_fails_when_no_builder_implements_feature(self):
        builder = self.builder_for_features('foo', 'bar')
        self.assertEqual(self.registry.lookup('baz'), None)

    def test_lookup_gets_most_recent_registration_when_no_feature_specified(self):
        builder1 = self.builder_for_features('foo')
        builder2 = self.builder_for_features('bar')
        self.assertEqual(self.registry.lookup(), builder2)

    def test_lookup_fails_when_no_tree_builders_registered(self):
        self.assertEqual(self.registry.lookup(), None)

    def test_lookup_gets_most_recent_builder_supporting_all_features(self):
        has_one = self.builder_for_features('foo')
        has_the_other = self.builder_for_features('bar')
        has_both_early = self.builder_for_features('foo', 'bar', 'baz')
        has_both_late = self.builder_for_features('foo', 'bar', 'quux')
        lacks_one = self.builder_for_features('bar')
        has_the_other = self.builder_for_features('foo')

        # There are two builders featuring 'foo' and 'bar', but
        # the one that also features 'quux' was registered later.
        self.assertEqual(self.registry.lookup('foo', 'bar'),
                         has_both_late)

        # There is only one builder featuring 'foo', 'bar', and 'baz'.
        self.assertEqual(self.registry.lookup('foo', 'bar', 'baz'),
                         has_both_early)

    def test_lookup_fails_when_cannot_reconcile_requested_features(self):
        builder1 = self.builder_for_features('foo', 'bar')
        builder2 = self.builder_for_features('foo', 'baz')
        self.assertEqual(self.registry.lookup('bar', 'baz'), None)
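The RegistryTest cases above pin down the lookup semantics: builders register in order, a lookup must match every requested feature, and the most recently registered match wins. A minimal sketch of that behavior (this is an illustration, not bs4's actual `TreeBuilderRegistry` code):

```python
class Registry:
    """Toy feature registry mirroring the semantics the tests exercise."""

    def __init__(self):
        self.builders = []

    def register(self, builder):
        self.builders.append(builder)

    def lookup(self, *features):
        # Walk newest-first so the latest matching registration wins;
        # with no features requested, the most recent builder matches.
        for builder in reversed(self.builders):
            if all(f in builder.features for f in features):
                return builder
        return None

class A: features = ('foo',)
class B: features = ('foo', 'bar')

r = Registry()
r.register(A)
r.register(B)
print(r.lookup('foo') is B)         # True: most recent match wins
print(r.lookup('foo', 'bar') is B)  # True: all features must match
print(r.lookup('baz'))              # None: no builder advertises 'baz'
```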
@@ -1,36 +0,0 @@
"Test harness for doctests."

# pylint: disable-msg=E0611,W0142

__metaclass__ = type
__all__ = [
    'additional_tests',
    ]

import atexit
import doctest
import os
#from pkg_resources import (
#    resource_filename, resource_exists, resource_listdir, cleanup_resources)
import unittest

DOCTEST_FLAGS = (
    doctest.ELLIPSIS |
    doctest.NORMALIZE_WHITESPACE |
    doctest.REPORT_NDIFF)


# def additional_tests():
#     "Run the doc tests (README.txt and docs/*, if any exist)"
#     doctest_files = [
#         os.path.abspath(resource_filename('bs4', 'README.txt'))]
#     if resource_exists('bs4', 'docs'):
#         for name in resource_listdir('bs4', 'docs'):
#             if name.endswith('.txt'):
#                 doctest_files.append(
#                     os.path.abspath(
#                         resource_filename('bs4', 'docs/%s' % name)))
#     kwargs = dict(module_relative=False, optionflags=DOCTEST_FLAGS)
#     atexit.register(cleanup_resources)
#     return unittest.TestSuite((
#         doctest.DocFileSuite(*doctest_files, **kwargs)))
@@ -1,85 +0,0 @@
"""Tests to ensure that the html5lib tree builder generates good trees."""

import warnings

try:
    from bs4.builder import HTML5TreeBuilder
    HTML5LIB_PRESENT = True
except ImportError, e:
    HTML5LIB_PRESENT = False

from bs4.element import SoupStrainer
from bs4.testing import (
    HTML5TreeBuilderSmokeTest,
    SoupTest,
    skipIf,
)

@skipIf(
    not HTML5LIB_PRESENT,
    "html5lib seems not to be present, not testing its tree builder.")
class HTML5LibBuilderSmokeTest(SoupTest, HTML5TreeBuilderSmokeTest):
    """See ``HTML5TreeBuilderSmokeTest``."""

    @property
    def default_builder(self):
        return HTML5TreeBuilder()

    def test_soupstrainer(self):
        # The html5lib tree builder does not support SoupStrainers.
        strainer = SoupStrainer("b")
        markup = "<p>A <b>bold</b> statement.</p>"
        with warnings.catch_warnings(record=True) as w:
            soup = self.soup(markup, parse_only=strainer)
        self.assertEqual(
            soup.decode(), self.document_for(markup))

        self.assertTrue(
            "the html5lib tree builder doesn't support parse_only" in
            str(w[0].message))

    def test_correctly_nested_tables(self):
        """html5lib inserts <tbody> tags where other parsers don't."""
        markup = ('<table id="1">'
                  '<tr>'
                  "<td>Here's another table:"
                  '<table id="2">'
                  '<tr><td>foo</td></tr>'
                  '</table></td>')

        self.assertSoupEquals(
            markup,
            '<table id="1"><tbody><tr><td>Here\'s another table:'
            '<table id="2"><tbody><tr><td>foo</td></tr></tbody></table>'
            '</td></tr></tbody></table>')

        self.assertSoupEquals(
            "<table><thead><tr><td>Foo</td></tr></thead>"
            "<tbody><tr><td>Bar</td></tr></tbody>"
            "<tfoot><tr><td>Baz</td></tr></tfoot></table>")

    def test_xml_declaration_followed_by_doctype(self):
        markup = '''<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html>
<html>
  <head>
  </head>
  <body>
    <p>foo</p>
  </body>
</html>'''
        soup = self.soup(markup)
        # Verify that we can reach the <p> tag; this means the tree is connected.
        self.assertEqual(b"<p>foo</p>", soup.p.encode())

    def test_reparented_markup(self):
        markup = '<p><em>foo</p>\n<p>bar<a></a></em></p>'
        soup = self.soup(markup)
        self.assertEqual(u"<body><p><em>foo</em></p><em>\n</em><p><em>bar<a></a></em></p></body>", soup.body.decode())
        self.assertEqual(2, len(soup.find_all('p')))

    def test_reparented_markup_ends_with_whitespace(self):
        markup = '<p><em>foo</p>\n<p>bar<a></a></em></p>\n'
        soup = self.soup(markup)
        self.assertEqual(u"<body><p><em>foo</em></p><em>\n</em><p><em>bar<a></a></em></p>\n</body>", soup.body.decode())
        self.assertEqual(2, len(soup.find_all('p')))
@@ -1,19 +0,0 @@
"""Tests to ensure that the html.parser tree builder generates good
trees."""

from bs4.testing import SoupTest, HTMLTreeBuilderSmokeTest
from bs4.builder import HTMLParserTreeBuilder

class HTMLParserTreeBuilderSmokeTest(SoupTest, HTMLTreeBuilderSmokeTest):

    @property
    def default_builder(self):
        return HTMLParserTreeBuilder()

    def test_namespaced_system_doctype(self):
        # html.parser can't handle namespaced doctypes, so skip this one.
        pass

    def test_namespaced_public_doctype(self):
        # html.parser can't handle namespaced doctypes, so skip this one.
        pass
@@ -1,91 +0,0 @@
"""Tests to ensure that the lxml tree builder generates good trees."""

import re
import warnings

try:
    import lxml.etree
    LXML_PRESENT = True
    LXML_VERSION = lxml.etree.LXML_VERSION
except ImportError, e:
    LXML_PRESENT = False
    LXML_VERSION = (0,)

if LXML_PRESENT:
    from bs4.builder import LXMLTreeBuilder, LXMLTreeBuilderForXML

from bs4 import (
    BeautifulSoup,
    BeautifulStoneSoup,
    )
from bs4.element import Comment, Doctype, SoupStrainer
from bs4.testing import skipIf
from bs4.tests import test_htmlparser
from bs4.testing import (
    HTMLTreeBuilderSmokeTest,
    XMLTreeBuilderSmokeTest,
    SoupTest,
    skipIf,
)

@skipIf(
    not LXML_PRESENT,
    "lxml seems not to be present, not testing its tree builder.")
class LXMLTreeBuilderSmokeTest(SoupTest, HTMLTreeBuilderSmokeTest):
    """See ``HTMLTreeBuilderSmokeTest``."""

    @property
    def default_builder(self):
        return LXMLTreeBuilder()

    def test_out_of_range_entity(self):
        self.assertSoupEquals(
            "<p>foo&#10000000000000000;bar</p>", "<p>foobar</p>")
        self.assertSoupEquals(
            "<p>foo&#x10000000000000000;bar</p>", "<p>foobar</p>")
        self.assertSoupEquals(
            "<p>foo&#1000000000;bar</p>", "<p>foobar</p>")

    # In lxml < 2.3.5, an empty doctype causes a segfault. Skip this
    # test if an old version of lxml is installed.

    @skipIf(
        not LXML_PRESENT or LXML_VERSION < (2,3,5,0),
        "Skipping doctype test for old version of lxml to avoid segfault.")
    def test_empty_doctype(self):
        soup = self.soup("<!DOCTYPE>")
        doctype = soup.contents[0]
        self.assertEqual("", doctype.strip())

    def test_beautifulstonesoup_is_xml_parser(self):
        # Make sure that the deprecated BSS class uses an xml builder
        # if one is installed.
        with warnings.catch_warnings(record=True) as w:
            soup = BeautifulStoneSoup("<b />")
        self.assertEqual(u"<b/>", unicode(soup.b))
        self.assertTrue("BeautifulStoneSoup class is deprecated" in str(w[0].message))

    def test_real_xhtml_document(self):
        """lxml strips the XML definition from an XHTML doc, which is fine."""
        markup = b"""<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
<html xmlns="http://www.w3.org/1999/xhtml">
<head><title>Hello.</title></head>
<body>Goodbye.</body>
</html>"""
        soup = self.soup(markup)
        self.assertEqual(
            soup.encode("utf-8").replace(b"\n", b''),
            markup.replace(b'\n', b'').replace(
                b'<?xml version="1.0" encoding="utf-8"?>', b''))


@skipIf(
    not LXML_PRESENT,
    "lxml seems not to be present, not testing its XML tree builder.")
class LXMLXMLTreeBuilderSmokeTest(SoupTest, XMLTreeBuilderSmokeTest):
    """See ``HTMLTreeBuilderSmokeTest``."""

    @property
    def default_builder(self):
        return LXMLTreeBuilderForXML()
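The `test_out_of_range_entity` case above feeds the parser numeric character references beyond the Unicode range and expects them to be dropped. For comparison, here is how the Python 3 standard library resolves character references (shown only as a related illustration; it is not the code under test):

```python
from html import unescape

# In-range numeric and named references resolve to the corresponding
# character; a bare "&" that is part of an entity is consumed.
print(unescape("&#9731;"))  # the snowman character, U+2603
print(unescape("&amp;T"))   # &T

# Out-of-range references like &#10000000000000000; are handled
# specially by the HTML5 rules rather than raising an error.
unescape("&#10000000000000000;")
```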
@ -1,434 +0,0 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Tests of Beautiful Soup as a whole."""
|
||||
|
||||
import logging
|
||||
import unittest
|
||||
import sys
|
||||
import tempfile
|
||||
|
||||
from bs4 import (
|
||||
BeautifulSoup,
|
||||
BeautifulStoneSoup,
|
||||
)
|
||||
from bs4.element import (
|
||||
CharsetMetaAttributeValue,
|
||||
ContentMetaAttributeValue,
|
||||
SoupStrainer,
|
||||
NamespacedAttribute,
|
||||
)
|
||||
import bs4.dammit
|
||||
from bs4.dammit import (
|
||||
EntitySubstitution,
|
||||
UnicodeDammit,
|
||||
)
|
||||
from bs4.testing import (
|
||||
SoupTest,
|
||||
skipIf,
|
||||
)
|
||||
import warnings
|
||||
|
||||
try:
|
||||
from bs4.builder import LXMLTreeBuilder, LXMLTreeBuilderForXML
|
||||
LXML_PRESENT = True
|
||||
except ImportError, e:
|
||||
LXML_PRESENT = False
|
||||
|
||||
PYTHON_2_PRE_2_7 = (sys.version_info < (2,7))
|
||||
PYTHON_3_PRE_3_2 = (sys.version_info[0] == 3 and sys.version_info < (3,2))
|
||||
|
||||
class TestConstructor(SoupTest):
|
||||
|
||||
def test_short_unicode_input(self):
|
||||
data = u"<h1>éé</h1>"
|
||||
soup = self.soup(data)
|
||||
self.assertEqual(u"éé", soup.h1.string)
|
||||
|
||||
def test_embedded_null(self):
|
||||
data = u"<h1>foo\0bar</h1>"
|
||||
soup = self.soup(data)
|
||||
self.assertEqual(u"foo\0bar", soup.h1.string)
|
||||
|
||||
|
||||
class TestDeprecatedConstructorArguments(SoupTest):
|
||||
|
||||
def test_parseOnlyThese_renamed_to_parse_only(self):
|
||||
with warnings.catch_warnings(record=True) as w:
|
||||
soup = self.soup("<a><b></b></a>", parseOnlyThese=SoupStrainer("b"))
|
||||
msg = str(w[0].message)
|
||||
self.assertTrue("parseOnlyThese" in msg)
|
||||
self.assertTrue("parse_only" in msg)
|
||||
self.assertEqual(b"<b></b>", soup.encode())
|
||||
|
||||
def test_fromEncoding_renamed_to_from_encoding(self):
|
||||
with warnings.catch_warnings(record=True) as w:
|
||||
utf8 = b"\xc3\xa9"
|
||||
soup = self.soup(utf8, fromEncoding="utf8")
|
||||
msg = str(w[0].message)
|
||||
self.assertTrue("fromEncoding" in msg)
|
||||
self.assertTrue("from_encoding" in msg)
|
||||
self.assertEqual("utf8", soup.original_encoding)
|
||||
|
||||
def test_unrecognized_keyword_argument(self):
|
||||
self.assertRaises(
|
||||
TypeError, self.soup, "<a>", no_such_argument=True)
|
||||
|
||||
class TestWarnings(SoupTest):
|
||||
|
||||
def test_disk_file_warning(self):
|
||||
filehandle = tempfile.NamedTemporaryFile()
|
||||
filename = filehandle.name
|
||||
try:
|
||||
with warnings.catch_warnings(record=True) as w:
|
||||
soup = self.soup(filename)
|
||||
msg = str(w[0].message)
|
||||
self.assertTrue("looks like a filename" in msg)
|
||||
finally:
|
||||
filehandle.close()
|
||||
|
||||
# The file no longer exists, so Beautiful Soup will no longer issue the warning.
|
||||
with warnings.catch_warnings(record=True) as w:
|
||||
soup = self.soup(filename)
|
||||
self.assertEqual(0, len(w))
|
||||
|
||||
def test_url_warning(self):
|
||||
with warnings.catch_warnings(record=True) as w:
|
||||
soup = self.soup("http://www.crummy.com/")
|
||||
msg = str(w[0].message)
|
||||
self.assertTrue("looks like a URL" in msg)
|
||||
|
||||
with warnings.catch_warnings(record=True) as w:
|
||||
soup = self.soup("http://www.crummy.com/ is great")
|
||||
self.assertEqual(0, len(w))
|
||||
|
||||
class TestSelectiveParsing(SoupTest):
|
||||
|
||||
def test_parse_with_soupstrainer(self):
|
||||
markup = "No<b>Yes</b><a>No<b>Yes <c>Yes</c></b>"
|
||||
strainer = SoupStrainer("b")
|
||||
soup = self.soup(markup, parse_only=strainer)
|
||||
self.assertEqual(soup.encode(), b"<b>Yes</b><b>Yes <c>Yes</c></b>")
|
||||
|
||||
|
||||
class TestEntitySubstitution(unittest.TestCase):
|
||||
"""Standalone tests of the EntitySubstitution class."""
|
||||
def setUp(self):
|
||||
self.sub = EntitySubstitution
|
||||
|
||||
def test_simple_html_substitution(self):
|
||||
# Unicode characters corresponding to named HTML entites
|
||||
# are substituted, and no others.
|
||||
s = u"foo\u2200\N{SNOWMAN}\u00f5bar"
|
||||
self.assertEqual(self.sub.substitute_html(s),
|
||||
u"foo∀\N{SNOWMAN}õbar")
|
||||
|
||||
def test_smart_quote_substitution(self):
|
||||
# MS smart quotes are a common source of frustration, so we
|
||||
# give them a special test.
|
||||
quotes = b"\x91\x92foo\x93\x94"
|
||||
dammit = UnicodeDammit(quotes)
|
||||
self.assertEqual(self.sub.substitute_html(dammit.markup),
|
||||
"‘’foo“”")
|
||||
|
||||
def test_xml_converstion_includes_no_quotes_if_make_quoted_attribute_is_false(self):
|
||||
s = 'Welcome to "my bar"'
|
||||
self.assertEqual(self.sub.substitute_xml(s, False), s)
|
||||
|
||||
def test_xml_attribute_quoting_normally_uses_double_quotes(self):
|
||||
self.assertEqual(self.sub.substitute_xml("Welcome", True),
|
||||
'"Welcome"')
|
||||
self.assertEqual(self.sub.substitute_xml("Bob's Bar", True),
|
||||
'"Bob\'s Bar"')
|
||||
|
||||
def test_xml_attribute_quoting_uses_single_quotes_when_value_contains_double_quotes(self):
|
||||
s = 'Welcome to "my bar"'
|
||||
self.assertEqual(self.sub.substitute_xml(s, True),
|
||||
"'Welcome to \"my bar\"'")
|
||||
|
||||
def test_xml_attribute_quoting_escapes_single_quotes_when_value_contains_both_single_and_double_quotes(self):
|
||||
s = 'Welcome to "Bob\'s Bar"'
|
||||
self.assertEqual(
|
||||
self.sub.substitute_xml(s, True),
|
||||
'"Welcome to "Bob\'s Bar""')
|
||||
|
||||
def test_xml_quotes_arent_escaped_when_value_is_not_being_quoted(self):
|
||||
quoted = 'Welcome to "Bob\'s Bar"'
|
||||
self.assertEqual(self.sub.substitute_xml(quoted), quoted)
|
||||
|
||||
def test_xml_quoting_handles_angle_brackets(self):
|
||||
self.assertEqual(
|
||||
self.sub.substitute_xml("foo<bar>"),
|
||||
"foo<bar>")
|
||||
|
||||
def test_xml_quoting_handles_ampersands(self):
|
||||
self.assertEqual(self.sub.substitute_xml("AT&T"), "AT&T")
|
||||
|
||||
def test_xml_quoting_including_ampersands_when_they_are_part_of_an_entity(self):
|
||||
self.assertEqual(
|
||||
self.sub.substitute_xml("ÁT&T"),
|
||||
"&Aacute;T&T")
|
||||
|
||||
def test_xml_quoting_ignoring_ampersands_when_they_are_part_of_an_entity(self):
|
||||
self.assertEqual(
|
||||
self.sub.substitute_xml_containing_entities("ÁT&T"),
|
||||
"ÁT&T")
|
||||
|
||||
def test_quotes_not_html_substituted(self):
|
||||
"""There's no need to do this except inside attribute values."""
|
||||
text = 'Bob\'s "bar"'
|
||||
self.assertEqual(self.sub.substitute_html(text), text)
|
||||
|
||||
|
||||
class TestEncodingConversion(SoupTest):
|
||||
# Test Beautiful Soup's ability to decode and encode from various
|
||||
# encodings.
|
||||
|
||||
def setUp(self):
|
||||
super(TestEncodingConversion, self).setUp()
|
||||
self.unicode_data = u'<html><head><meta charset="utf-8"/></head><body><foo>Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!</foo></body></html>'
|
||||
self.utf8_data = self.unicode_data.encode("utf-8")
|
||||
# Just so you know what it looks like.
|
||||
self.assertEqual(
|
||||
self.utf8_data,
|
||||
b'<html><head><meta charset="utf-8"/></head><body><foo>Sacr\xc3\xa9 bleu!</foo></body></html>')
|
||||
|
||||
def test_ascii_in_unicode_out(self):
|
||||
# ASCII input is converted to Unicode. The original_encoding
|
||||
# attribute is set to 'utf-8', a superset of ASCII.
|
||||
chardet = bs4.dammit.chardet_dammit
|
||||
logging.disable(logging.WARNING)
|
||||
try:
|
||||
def noop(str):
|
||||
return None
|
||||
# Disable chardet, which will realize that the ASCII is ASCII.
|
||||
bs4.dammit.chardet_dammit = noop
|
||||
ascii = b"<foo>a</foo>"
|
||||
soup_from_ascii = self.soup(ascii)
|
||||
unicode_output = soup_from_ascii.decode()
|
||||
self.assertTrue(isinstance(unicode_output, unicode))
|
||||
self.assertEqual(unicode_output, self.document_for(ascii.decode()))
|
||||
self.assertEqual(soup_from_ascii.original_encoding.lower(), "utf-8")
|
||||
finally:
|
||||
logging.disable(logging.NOTSET)
|
||||
bs4.dammit.chardet_dammit = chardet
|
||||
|
||||
def test_unicode_in_unicode_out(self):
|
||||
# Unicode input is left alone. The original_encoding attribute
|
||||
# is not set.
|
||||
soup_from_unicode = self.soup(self.unicode_data)
|
||||
self.assertEqual(soup_from_unicode.decode(), self.unicode_data)
|
||||
self.assertEqual(soup_from_unicode.foo.string, u'Sacr\xe9 bleu!')
|
||||
self.assertEqual(soup_from_unicode.original_encoding, None)
|
||||
|
||||
def test_utf8_in_unicode_out(self):
|
||||
# UTF-8 input is converted to Unicode. The original_encoding
|
||||
# attribute is set.
|
||||
soup_from_utf8 = self.soup(self.utf8_data)
|
||||
self.assertEqual(soup_from_utf8.decode(), self.unicode_data)
|
||||
self.assertEqual(soup_from_utf8.foo.string, u'Sacr\xe9 bleu!')
|
||||
|
||||
def test_utf8_out(self):
|
||||
# The internal data structures can be encoded as UTF-8.
|
||||
soup_from_unicode = self.soup(self.unicode_data)
|
||||
self.assertEqual(soup_from_unicode.encode('utf-8'), self.utf8_data)
|
||||
|
||||
@skipIf(
|
||||
PYTHON_2_PRE_2_7 or PYTHON_3_PRE_3_2,
|
||||
"Bad HTMLParser detected; skipping test of non-ASCII characters in attribute name.")
|
||||
def test_attribute_name_containing_unicode_characters(self):
|
||||
markup = u'<div><a \N{SNOWMAN}="snowman"></a></div>'
|
||||
self.assertEqual(self.soup(markup).div.encode("utf8"), markup.encode("utf8"))
|
||||
|
||||
class TestUnicodeDammit(unittest.TestCase):
|
||||
"""Standalone tests of UnicodeDammit."""
|
||||
|
||||
def test_unicode_input(self):
|
||||
markup = u"I'm already Unicode! \N{SNOWMAN}"
|
||||
dammit = UnicodeDammit(markup)
|
||||
self.assertEqual(dammit.unicode_markup, markup)
|
||||
|
||||
def test_smart_quotes_to_unicode(self):
|
||||
markup = b"<foo>\x91\x92\x93\x94</foo>"
|
||||
dammit = UnicodeDammit(markup)
|
||||
self.assertEqual(
|
||||
dammit.unicode_markup, u"<foo>\u2018\u2019\u201c\u201d</foo>")
|
||||
|
||||
def test_smart_quotes_to_xml_entities(self):
|
||||
markup = b"<foo>\x91\x92\x93\x94</foo>"
|
||||
dammit = UnicodeDammit(markup, smart_quotes_to="xml")
|
||||
self.assertEqual(
|
||||
dammit.unicode_markup, "<foo>‘’“”</foo>")
|
||||
|
||||
def test_smart_quotes_to_html_entities(self):
|
||||
markup = b"<foo>\x91\x92\x93\x94</foo>"
|
||||
dammit = UnicodeDammit(markup, smart_quotes_to="html")
|
||||
self.assertEqual(
|
||||
dammit.unicode_markup, "<foo>‘’“”</foo>")
|
||||
|
||||
def test_smart_quotes_to_ascii(self):
|
||||
markup = b"<foo>\x91\x92\x93\x94</foo>"
|
||||
dammit = UnicodeDammit(markup, smart_quotes_to="ascii")
|
||||
self.assertEqual(
|
||||
dammit.unicode_markup, """<foo>''""</foo>""")
|
||||
|
||||
def test_detect_utf8(self):
|
||||
utf8 = b"\xc3\xa9"
|
||||
dammit = UnicodeDammit(utf8)
|
||||
self.assertEqual(dammit.unicode_markup, u'\xe9')
|
||||
self.assertEqual(dammit.original_encoding.lower(), 'utf-8')
|
||||
|
||||
def test_convert_hebrew(self):
|
||||
hebrew = b"\xed\xe5\xec\xf9"
|
||||
dammit = UnicodeDammit(hebrew, ["iso-8859-8"])
|
||||
self.assertEqual(dammit.original_encoding.lower(), 'iso-8859-8')
|
||||
self.assertEqual(dammit.unicode_markup, u'\u05dd\u05d5\u05dc\u05e9')
|
||||
|
||||
def test_dont_see_smart_quotes_where_there_are_none(self):
|
||||
utf_8 = b"\343\202\261\343\203\274\343\202\277\343\202\244 Watch"
|
||||
dammit = UnicodeDammit(utf_8)
|
||||
self.assertEqual(dammit.original_encoding.lower(), 'utf-8')
|
||||
self.assertEqual(dammit.unicode_markup.encode("utf-8"), utf_8)
|
||||
|
||||
def test_ignore_inappropriate_codecs(self):
|
||||
utf8_data = u"Räksmörgås".encode("utf-8")
|
||||
dammit = UnicodeDammit(utf8_data, ["iso-8859-8"])
|
||||
self.assertEqual(dammit.original_encoding.lower(), 'utf-8')
|
||||
|
||||
def test_ignore_invalid_codecs(self):
|
||||
utf8_data = u"Räksmörgås".encode("utf-8")
|
||||
for bad_encoding in ['.utf8', '...', 'utF---16.!']:
|
||||
dammit = UnicodeDammit(utf8_data, [bad_encoding])
|
||||
self.assertEqual(dammit.original_encoding.lower(), 'utf-8')
|
||||
|
||||
def test_detect_html5_style_meta_tag(self):
|
||||
|
||||
for data in (
|
||||
b'<html><meta charset="euc-jp" /></html>',
|
||||
b"<html><meta charset='euc-jp' /></html>",
|
||||
b"<html><meta charset=euc-jp /></html>",
|
||||
b"<html><meta charset=euc-jp/></html>"):
|
||||
dammit = UnicodeDammit(data, is_html=True)
|
||||
self.assertEqual(
|
||||
"euc-jp", dammit.original_encoding)
|
||||
|
||||
def test_last_ditch_entity_replacement(self):
|
||||
# This is a UTF-8 document that contains bytestrings
|
||||
# completely incompatible with UTF-8 (ie. encoded with some other
|
||||
# encoding).
|
||||
#
|
||||
# Since there is no consistent encoding for the document,
|
||||
# Unicode, Dammit will eventually encode the document as UTF-8
|
||||
# and encode the incompatible characters as REPLACEMENT
|
||||
# CHARACTER.
|
||||
#
|
||||
# If chardet is installed, it will detect that the document
|
||||
# can be converted into ISO-8859-1 without errors. This happens
|
||||
# to be the wrong encoding, but it is a consistent encoding, so the
|
||||
# code we're testing here won't run.
|
||||
#
|
||||
# So we temporarily disable chardet if it's present.
|
||||
doc = b"""\357\273\277<?xml version="1.0" encoding="UTF-8"?>
|
||||
<html><b>\330\250\330\252\330\261</b>
|
||||
<i>\310\322\321\220\312\321\355\344</i></html>"""
|
||||
chardet = bs4.dammit.chardet_dammit
|
||||
logging.disable(logging.WARNING)
|
||||
try:
|
||||
def noop(str):
|
||||
return None
|
||||
bs4.dammit.chardet_dammit = noop
|
||||
dammit = UnicodeDammit(doc)
|
||||
self.assertEqual(True, dammit.contains_replacement_characters)
|
||||
self.assertTrue(u"\ufffd" in dammit.unicode_markup)
|
||||
|
||||
soup = BeautifulSoup(doc, "html.parser")
|
||||
self.assertTrue(soup.contains_replacement_characters)
|
||||
finally:
|
||||
logging.disable(logging.NOTSET)
|
||||
bs4.dammit.chardet_dammit = chardet
|
||||
|
||||
def test_byte_order_mark_removed(self):
|
||||
# A document written in UTF-16LE will have its byte order marker stripped.
|
||||
data = b'\xff\xfe<\x00a\x00>\x00\xe1\x00\xe9\x00<\x00/\x00a\x00>\x00'
|
||||
dammit = UnicodeDammit(data)
|
||||
self.assertEqual(u"<a>áé</a>", dammit.unicode_markup)
|
||||
self.assertEqual("utf-16le", dammit.original_encoding)
|
||||
|
||||
def test_detwingle(self):
|
||||
# Here's a UTF8 document.
|
||||
utf8 = (u"\N{SNOWMAN}" * 3).encode("utf8")
|
||||
|
||||
# Here's a Windows-1252 document.
|
||||
windows_1252 = (
|
||||
u"\N{LEFT DOUBLE QUOTATION MARK}Hi, I like Windows!"
|
||||
u"\N{RIGHT DOUBLE QUOTATION MARK}").encode("windows_1252")
|
||||
|
||||
# Through some unholy alchemy, they've been stuck together.
|
||||
doc = utf8 + windows_1252 + utf8
|
||||
|
||||
# The document can't be turned into UTF-8:
|
||||
self.assertRaises(UnicodeDecodeError, doc.decode, "utf8")
|
||||
|
||||
# Unicode, Dammit thinks the whole document is Windows-1252,
|
||||
# and decodes it into "☃☃☃“Hi, I like Windows!”☃☃☃"
|
||||
|
||||
# But if we run it through fix_embedded_windows_1252, it's fixed:
|
||||
|
||||
fixed = UnicodeDammit.detwingle(doc)
|
||||
self.assertEqual(
|
||||
u"☃☃☃“Hi, I like Windows!”☃☃☃", fixed.decode("utf8"))
|
||||
|
||||
def test_detwingle_ignores_multibyte_characters(self):
|
||||
# Each of these characters has a UTF-8 representation ending
|
||||
# in \x93. \x93 is a smart quote if interpreted as
|
||||
# Windows-1252. But our code knows to skip over multibyte
|
||||
# UTF-8 characters, so they'll survive the process unscathed.
|
||||
for tricky_unicode_char in (
|
||||
u"\N{LATIN SMALL LIGATURE OE}", # 2-byte char '\xc5\x93'
|
||||
u"\N{LATIN SUBSCRIPT SMALL LETTER X}", # 3-byte char '\xe2\x82\x93'
|
||||
u"\xf0\x90\x90\x93", # This is a CJK character, not sure which one.
|
||||
):
|
||||
input = tricky_unicode_char.encode("utf8")
|
||||
self.assertTrue(input.endswith(b'\x93'))
|
||||
output = UnicodeDammit.detwingle(input)
|
||||
self.assertEqual(output, input)

class TestNamedspacedAttribute(SoupTest):

    def test_name_may_be_none(self):
        a = NamespacedAttribute("xmlns", None)
        self.assertEqual(a, "xmlns")

    def test_attribute_is_equivalent_to_colon_separated_string(self):
        a = NamespacedAttribute("a", "b")
        self.assertEqual("a:b", a)

    def test_attributes_are_equivalent_if_prefix_and_name_identical(self):
        a = NamespacedAttribute("a", "b", "c")
        b = NamespacedAttribute("a", "b", "c")
        self.assertEqual(a, b)

        # The actual namespace is not considered.
        c = NamespacedAttribute("a", "b", None)
        self.assertEqual(a, c)

        # But name and prefix are important.
        d = NamespacedAttribute("a", "z", "c")
        self.assertNotEqual(a, d)

        e = NamespacedAttribute("z", "b", "c")
        self.assertNotEqual(a, e)


class TestAttributeValueWithCharsetSubstitution(unittest.TestCase):

    def test_content_meta_attribute_value(self):
        value = CharsetMetaAttributeValue("euc-jp")
        self.assertEqual("euc-jp", value)
        self.assertEqual("euc-jp", value.original_value)
        self.assertEqual("utf8", value.encode("utf8"))

    def test_content_meta_attribute_value(self):
        value = ContentMetaAttributeValue("text/html; charset=euc-jp")
        self.assertEqual("text/html; charset=euc-jp", value)
        self.assertEqual("text/html; charset=euc-jp", value.original_value)
        self.assertEqual("text/html; charset=utf8", value.encode("utf8"))

@ -90,7 +90,7 @@ class FileCache(BaseCache):

    def delete(self, key):
        name = self._fn(key)
        if not self.forever:
        if not self.forever and os.path.exists(name):
            os.remove(name)

1
lib/configobj/_version.py
Normal file

@ -0,0 +1 @@
__version__ = '5.1.0'

1471
lib/configobj/validate.py
Normal file

@ -1,10 +1,2 @@
# -*- coding: utf-8 -*-
"""
Copyright (c) 2003-2010 Gustavo Niemeyer <gustavo@niemeyer.net>

This module offers extensions to the standard Python
datetime module.
"""
__author__ = "Tomi Pieviläinen <tomi.pievilainen@iki.fi>"
__license__ = "Simplified BSD"
__version__ = "2.2"
__version__ = "2.4.2"

@ -1,18 +1,17 @@
# -*- coding: utf-8 -*-
"""
Copyright (c) 2003-2007 Gustavo Niemeyer <gustavo@niemeyer.net>

This module offers extensions to the standard Python
datetime module.
This module offers a generic easter computing method for any given year, using
Western, Orthodox or Julian algorithms.
"""
__license__ = "Simplified BSD"

import datetime

__all__ = ["easter", "EASTER_JULIAN", "EASTER_ORTHODOX", "EASTER_WESTERN"]

EASTER_JULIAN = 1
EASTER_JULIAN = 1
EASTER_ORTHODOX = 2
EASTER_WESTERN = 3
EASTER_WESTERN = 3


def easter(year, method=EASTER_WESTERN):
    """

@ -24,7 +23,7 @@ def easter(year, method=EASTER_WESTERN):

    This algorithm implements three different easter
    calculation methods:

    1 - Original calculation in Julian calendar, valid in
        dates after 326 AD
    2 - Original method, with date converted to Gregorian

@ -39,7 +38,7 @@ def easter(year, method=EASTER_WESTERN):
        EASTER_WESTERN = 3

    The default method is method 3.

    More about the algorithm may be found at:

    http://users.chariot.net.au/~gmarts/eastalg.htm

@ -68,24 +67,23 @@ def easter(year, method=EASTER_WESTERN):
    e = 0
    if method < 3:
        # Old method
        i = (19*g+15)%30
        j = (y+y//4+i)%7
        i = (19*g + 15) % 30
        j = (y + y//4 + i) % 7
        if method == 2:
            # Extra dates to convert Julian to Gregorian date
            e = 10
            if y > 1600:
                e = e+y//100-16-(y//100-16)//4
                e = e + y//100 - 16 - (y//100 - 16)//4
    else:
        # New method
        c = y//100
        h = (c-c//4-(8*c+13)//25+19*g+15)%30
        i = h-(h//28)*(1-(h//28)*(29//(h+1))*((21-g)//11))
        j = (y+y//4+i+2-c+c//4)%7
        h = (c - c//4 - (8*c + 13)//25 + 19*g + 15) % 30
        i = h - (h//28)*(1 - (h//28)*(29//(h + 1))*((21 - g)//11))
        j = (y + y//4 + i + 2 - c + c//4) % 7

    # p can be from -6 to 56 corresponding to dates 22 March to 23 May
    # (later dates apply to method 2, although 23 May never actually occurs)
    p = i-j+e
    d = 1+(p+27+(p+6)//40)%31
    m = 3+(p+26)//30
    p = i - j + e
    d = 1 + (p + 27 + (p + 6)//40) % 31
    m = 3 + (p + 26)//30
    return datetime.date(int(y), int(m), int(d))
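As a sanity check on the arithmetic in the hunk above, the Western ("new method") branch can be run on its own. `easter_western` is a self-contained sketch that mirrors the hunk's formulas with `e = 0`, not the full `dateutil.easter` function:

```python
import datetime

def easter_western(year):
    # Western (Gregorian) branch of the algorithm shown above.
    y = year
    g = y % 19          # golden year number
    c = y // 100
    h = (c - c//4 - (8*c + 13)//25 + 19*g + 15) % 30
    i = h - (h//28)*(1 - (h//28)*(29//(h + 1))*((21 - g)//11))
    j = (y + y//4 + i + 2 - c + c//4) % 7
    p = i - j           # e is 0 for the new method
    d = 1 + (p + 27 + (p + 6)//40) % 31
    m = 3 + (p + 26)//30
    return datetime.date(y, m, d)

print(easter_western(2015))  # 2015-04-05
```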

@ -1,11 +1,4 @@
"""
Copyright (c) 2003-2010 Gustavo Niemeyer <gustavo@niemeyer.net>

This module offers extensions to the standard Python
datetime module.
"""
__license__ = "Simplified BSD"

# -*- coding: utf-8 -*-
import datetime
import calendar

@ -13,6 +6,7 @@ from six import integer_types

__all__ = ["relativedelta", "MO", "TU", "WE", "TH", "FR", "SA", "SU"]


class weekday(object):
    __slots__ = ["weekday", "n"]

@ -43,25 +37,35 @@ class weekday(object):

MO, TU, WE, TH, FR, SA, SU = weekdays = tuple([weekday(x) for x in range(7)])


class relativedelta(object):
    """
    The relativedelta type is based on the specification of the excelent
    work done by M.-A. Lemburg in his mx.DateTime extension. However,
    notice that this type does *NOT* implement the same algorithm as
    The relativedelta type is based on the specification of the excellent
    work done by M.-A. Lemburg in his
    `mx.DateTime <http://www.egenix.com/files/python/mxDateTime.html>`_ extension.
    However, notice that this type does *NOT* implement the same algorithm as
    his work. Do *NOT* expect it to behave like mx.DateTime's counterpart.

    There's two different ways to build a relativedelta instance. The
    first one is passing it two date/datetime classes:
    There are two different ways to build a relativedelta instance. The
    first one is passing it two date/datetime classes::

        relativedelta(datetime1, datetime2)

    And the other way is to use the following keyword arguments:
    The second one is passing it any number of the following keyword arguments::

        relativedelta(arg1=x,arg2=y,arg3=z...)

    year, month, day, hour, minute, second, microsecond:
        Absolute information.
        Absolute information (argument is singular); adding or subtracting a
        relativedelta with absolute information does not perform an aritmetic
        operation, but rather REPLACES the corresponding value in the
        original datetime with the value(s) in relativedelta.

    years, months, weeks, days, hours, minutes, seconds, microseconds:
        Relative information, may be negative.
        Relative information, may be negative (argument is plural); adding
        or subtracting a relativedelta with relative information performs
        the corresponding aritmetic operation on the original datetime value
        with the information in the relativedelta.

    weekday:
        One of the weekday instances (MO, TU, etc). These instances may

@ -80,26 +84,26 @@ And the other way is to use the following keyword arguments:

    Here is the behavior of operations with relativedelta:

    1) Calculate the absolute year, using the 'year' argument, or the
    1. Calculate the absolute year, using the 'year' argument, or the
       original datetime year, if the argument is not present.

    2) Add the relative 'years' argument to the absolute year.
    2. Add the relative 'years' argument to the absolute year.

    3) Do steps 1 and 2 for month/months.
    3. Do steps 1 and 2 for month/months.

    4) Calculate the absolute day, using the 'day' argument, or the
    4. Calculate the absolute day, using the 'day' argument, or the
       original datetime day, if the argument is not present. Then,
       subtract from the day until it fits in the year and month
       found after their operations.

    5) Add the relative 'days' argument to the absolute day. Notice
    5. Add the relative 'days' argument to the absolute day. Notice
       that the 'weeks' argument is multiplied by 7 and added to
       'days'.

    6) Do steps 1 and 2 for hour/hours, minute/minutes, second/seconds,
    6. Do steps 1 and 2 for hour/hours, minute/minutes, second/seconds,
       microsecond/microseconds.

    7) If the 'weekday' argument is present, calculate the weekday,
    7. If the 'weekday' argument is present, calculate the weekday,
       with the given (wday, nth) tuple. wday is the index of the
       weekday (0-6, 0=Mon), and nth is the number of weeks to add
       forward or backward, depending on its signal. Notice that if

@ -114,9 +118,14 @@ Here is the behavior of operations with relativedelta:
                 yearday=None, nlyearday=None,
                 hour=None, minute=None, second=None, microsecond=None):
        if dt1 and dt2:
            if (not isinstance(dt1, datetime.date)) or (not isinstance(dt2, datetime.date)):
            # datetime is a subclass of date. So both must be date
            if not (isinstance(dt1, datetime.date) and
                    isinstance(dt2, datetime.date)):
                raise TypeError("relativedelta only diffs datetime/date")
            if not type(dt1) == type(dt2): #isinstance(dt1, type(dt2)):
            # We allow two dates, or two datetimes, so we coerce them to be
            # of the same type
            if (isinstance(dt1, datetime.datetime) !=
                    isinstance(dt2, datetime.datetime)):
                if not isinstance(dt1, datetime.datetime):
                    dt1 = datetime.datetime.fromordinal(dt1.toordinal())
                elif not isinstance(dt2, datetime.datetime):

@ -158,7 +167,7 @@ Here is the behavior of operations with relativedelta:
        else:
            self.years = years
            self.months = months
            self.days = days+weeks*7
            self.days = days + weeks * 7
            self.leapdays = leapdays
            self.hours = hours
            self.minutes = minutes

@ -185,7 +194,8 @@ Here is the behavior of operations with relativedelta:
                if yearday > 59:
                    self.leapdays = -1
            if yday:
                ydayidx = [31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 366]
                ydayidx = [31, 59, 90, 120, 151, 181, 212,
                           243, 273, 304, 334, 366]
                for idx, ydays in enumerate(ydayidx):
                    if yday <= ydays:
                        self.month = idx+1

@ -225,13 +235,20 @@ Here is the behavior of operations with relativedelta:
            div, mod = divmod(self.months*s, 12)
            self.months = mod*s
            self.years += div*s
        if (self.hours or self.minutes or self.seconds or self.microseconds or
            self.hour is not None or self.minute is not None or
            self.second is not None or self.microsecond is not None):
        if (self.hours or self.minutes or self.seconds or self.microseconds
                or self.hour is not None or self.minute is not None or
                self.second is not None or self.microsecond is not None):
            self._has_time = 1
        else:
            self._has_time = 0

    @property
    def weeks(self):
        return self.days // 7

    @weeks.setter
    def weeks(self, value):
        self.days = self.days - (self.weeks * 7) + value*7

    def _set_months(self, months):
        self.months = months
        if abs(self.months) > 11:

@ -244,22 +261,24 @@ Here is the behavior of operations with relativedelta:

    def __add__(self, other):
        if isinstance(other, relativedelta):
            return relativedelta(years=other.years+self.years,
                                 months=other.months+self.months,
                                 days=other.days+self.days,
                                 hours=other.hours+self.hours,
                                 minutes=other.minutes+self.minutes,
                                 seconds=other.seconds+self.seconds,
                                 microseconds=other.microseconds+self.microseconds,
                                 leapdays=other.leapdays or self.leapdays,
                                 year=other.year or self.year,
                                 month=other.month or self.month,
                                 day=other.day or self.day,
                                 weekday=other.weekday or self.weekday,
                                 hour=other.hour or self.hour,
                                 minute=other.minute or self.minute,
                                 second=other.second or self.second,
                                 microsecond=other.microsecond or self.microsecond)
            return self.__class__(years=other.years+self.years,
                                  months=other.months+self.months,
                                  days=other.days+self.days,
                                  hours=other.hours+self.hours,
                                  minutes=other.minutes+self.minutes,
                                  seconds=other.seconds+self.seconds,
                                  microseconds=(other.microseconds +
                                                self.microseconds),
                                  leapdays=other.leapdays or self.leapdays,
                                  year=other.year or self.year,
                                  month=other.month or self.month,
                                  day=other.day or self.day,
                                  weekday=other.weekday or self.weekday,
                                  hour=other.hour or self.hour,
                                  minute=other.minute or self.minute,
                                  second=other.second or self.second,
                                  microsecond=(other.microsecond or
                                               self.microsecond))
        if not isinstance(other, datetime.date):
            raise TypeError("unsupported type for add operation")
        elif self._has_time and not isinstance(other, datetime.datetime):

@ -295,9 +314,9 @@ Here is the behavior of operations with relativedelta:
            weekday, nth = self.weekday.weekday, self.weekday.n or 1
            jumpdays = (abs(nth)-1)*7
            if nth > 0:
                jumpdays += (7-ret.weekday()+weekday)%7
                jumpdays += (7-ret.weekday()+weekday) % 7
            else:
                jumpdays += (ret.weekday()-weekday)%7
                jumpdays += (ret.weekday()-weekday) % 7
                jumpdays *= -1
            ret += datetime.timedelta(days=jumpdays)
        return ret

@ -311,7 +330,7 @@ Here is the behavior of operations with relativedelta:
    def __sub__(self, other):
        if not isinstance(other, relativedelta):
            raise TypeError("unsupported type for sub operation")
        return relativedelta(years=self.years-other.years,
        return self.__class__(years=self.years-other.years,
                              months=self.months-other.months,
                              days=self.days-other.days,
                              hours=self.hours-other.hours,

@ -329,7 +348,7 @@ Here is the behavior of operations with relativedelta:
                              microsecond=self.microsecond or other.microsecond)

    def __neg__(self):
        return relativedelta(years=-self.years,
        return self.__class__(years=-self.years,
                              months=-self.months,
                              days=-self.days,
                              hours=-self.hours,

@ -363,10 +382,12 @@ Here is the behavior of operations with relativedelta:
                self.minute is None and
                self.second is None and
                self.microsecond is None)
    # Compatibility with Python 2.x
    __nonzero__ = __bool__

    def __mul__(self, other):
        f = float(other)
        return relativedelta(years=int(self.years*f),
        return self.__class__(years=int(self.years*f),
                              months=int(self.months*f),
                              days=int(self.days*f),
                              hours=int(self.hours*f),
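The singular/plural distinction the updated docstring spells out can be illustrated with the standard library alone: an absolute argument (`month=1`) behaves like `datetime.replace`, a relative one (`months=1`) performs month arithmetic. This is a plain-datetime sketch of those semantics, not dateutil itself, and it ignores the day clamping relativedelta also does:

```python
import datetime

dt = datetime.datetime(2015, 8, 6, 11, 5)

# relativedelta(month=1) -- absolute: REPLACES the month, like .replace()
absolute = dt.replace(month=1)

# relativedelta(months=1) -- relative: month arithmetic; step the month
# and carry into the year by hand for this sketch.
carry, month = divmod(dt.month, 12)   # August -> (0, 8); next month is 9
relative = dt.replace(year=dt.year + carry, month=month + 1)

print(absolute)  # 2015-01-06 11:05:00
print(relative)  # 2015-09-06 11:05:00
```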

@ -1,19 +1,25 @@
# -*- coding: utf-8 -*-
"""
Copyright (c) 2003-2007 Gustavo Niemeyer <gustavo@niemeyer.net>

This module offers extensions to the standard Python
datetime module.
This module offers timezone implementations subclassing the abstract
:py:`datetime.tzinfo` type. There are classes to handle tzfile format files
(usually are in :file:`/etc/localtime`, :file:`/usr/share/zoneinfo`, etc), TZ
environment string (in all known formats), given ranges (with help from
relative deltas), local machine timezone, fixed offset timezone, and UTC
timezone.
"""
__license__ = "Simplified BSD"

from six import string_types, PY3

import datetime
import struct
import time
import sys
import os

from six import string_types, PY3

try:
    from dateutil.tzwin import tzwin, tzwinlocal
except ImportError:
    tzwin = tzwinlocal = None

relativedelta = None
parser = None
rrule = None

@ -21,32 +27,31 @@ rrule = None
__all__ = ["tzutc", "tzoffset", "tzlocal", "tzfile", "tzrange",
           "tzstr", "tzical", "tzwin", "tzwinlocal", "gettz"]

try:
    from dateutil.tzwin import tzwin, tzwinlocal
except (ImportError, OSError):
    tzwin, tzwinlocal = None, None

def tzname_in_python2(myfunc):
def tzname_in_python2(namefunc):
    """Change unicode output into bytestrings in Python 2

    tzname() API changed in Python 3. It used to return bytes, but was changed
    to unicode strings
    """
    def inner_func(*args, **kwargs):
        if PY3:
            return myfunc(*args, **kwargs)
        else:
            return myfunc(*args, **kwargs).encode()
    return inner_func
    def adjust_encoding(*args, **kwargs):
        name = namefunc(*args, **kwargs)
        if name is not None and not PY3:
            name = name.encode()

        return name

    return adjust_encoding
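Besides the rename, the rewrite from `inner_func` to `adjust_encoding` fixes the case where the wrapped `tzname()` returns `None`: the old wrapper called `.encode()` on it unconditionally on Python 2. A sketch parameterised on a flag instead of `six.PY3` so it runs on any interpreter; `tzname_compat` is a hypothetical name, not dateutil's API:

```python
def tzname_compat(namefunc, py3=True):
    def adjust_encoding(*args, **kwargs):
        name = namefunc(*args, **kwargs)
        # Python 2's tzname() API returned bytes; encode only when a name
        # actually came back (the None guard is what the rewrite adds).
        if name is not None and not py3:
            name = name.encode()
        return name
    return adjust_encoding

wrapped = tzname_compat(lambda: None, py3=False)
print(wrapped())  # None survives, where the old inner_func would raise
```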

ZERO = datetime.timedelta(0)
EPOCHORDINAL = datetime.datetime.utcfromtimestamp(0).toordinal()


class tzutc(datetime.tzinfo):

    def utcoffset(self, dt):
        return ZERO

    def dst(self, dt):
        return ZERO

@ -66,6 +71,7 @@ class tzutc(datetime.tzinfo):

    __reduce__ = object.__reduce__


class tzoffset(datetime.tzinfo):

    def __init__(self, name, offset):

@ -96,6 +102,7 @@ class tzoffset(datetime.tzinfo):

    __reduce__ = object.__reduce__


class tzlocal(datetime.tzinfo):

    _std_offset = datetime.timedelta(seconds=-time.timezone)

@ -123,25 +130,25 @@ class tzlocal(datetime.tzinfo):
    def _isdst(self, dt):
        # We can't use mktime here. It is unstable when deciding if
        # the hour near to a change is DST or not.
        #
        #
        # timestamp = time.mktime((dt.year, dt.month, dt.day, dt.hour,
        #                          dt.minute, dt.second, dt.weekday(), 0, -1))
        # return time.localtime(timestamp).tm_isdst
        #
        # The code above yields the following result:
        #
        #>>> import tz, datetime
        #>>> t = tz.tzlocal()
        #>>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
        #'BRDT'
        #>>> datetime.datetime(2003,2,16,0,tzinfo=t).tzname()
        #'BRST'
        #>>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
        #'BRST'
        #>>> datetime.datetime(2003,2,15,22,tzinfo=t).tzname()
        #'BRDT'
        #>>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
        #'BRDT'
        # >>> import tz, datetime
        # >>> t = tz.tzlocal()
        # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
        # 'BRDT'
        # >>> datetime.datetime(2003,2,16,0,tzinfo=t).tzname()
        # 'BRST'
        # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
        # 'BRST'
        # >>> datetime.datetime(2003,2,15,22,tzinfo=t).tzname()
        # 'BRDT'
        # >>> datetime.datetime(2003,2,15,23,tzinfo=t).tzname()
        # 'BRDT'
        #
        # Here is a more stable implementation:
        #

@ -166,6 +173,7 @@ class tzlocal(datetime.tzinfo):

    __reduce__ = object.__reduce__


class _ttinfo(object):
    __slots__ = ["offset", "delta", "isdst", "abbr", "isstd", "isgmt"]

@ -205,15 +213,20 @@ class _ttinfo(object):
            if name in state:
                setattr(self, name, state[name])


class tzfile(datetime.tzinfo):

    # http://www.twinsun.com/tz/tz-link.htm
    # ftp://ftp.iana.org/tz/tz*.tar.gz

    def __init__(self, fileobj):

    def __init__(self, fileobj, filename=None):
        file_opened_here = False
        if isinstance(fileobj, string_types):
            self._filename = fileobj
            fileobj = open(fileobj, 'rb')
            file_opened_here = True
        elif filename is not None:
            self._filename = filename
        elif hasattr(fileobj, "name"):
            self._filename = fileobj.name
        else:

@ -228,125 +241,128 @@ class tzfile(datetime.tzinfo):
        # six four-byte values of type long, written in a
        # ``standard'' byte order (the high-order byte
        # of the value is written first).
        try:
            if fileobj.read(4).decode() != "TZif":
                raise ValueError("magic not found")

        if fileobj.read(4).decode() != "TZif":
            raise ValueError("magic not found")
        fileobj.read(16)

            fileobj.read(16)
        (
            # The number of UTC/local indicators stored in the file.
            ttisgmtcnt,

            (
                # The number of UTC/local indicators stored in the file.
                ttisgmtcnt,
            # The number of standard/wall indicators stored in the file.
            ttisstdcnt,

                # The number of standard/wall indicators stored in the file.
                ttisstdcnt,

            # The number of leap seconds for which data is
            # stored in the file.
            leapcnt,
                # The number of leap seconds for which data is
                # stored in the file.
                leapcnt,

            # The number of "transition times" for which data
            # is stored in the file.
            timecnt,
                # The number of "transition times" for which data
                # is stored in the file.
                timecnt,

            # The number of "local time types" for which data
            # is stored in the file (must not be zero).
            typecnt,
                # The number of "local time types" for which data
                # is stored in the file (must not be zero).
                typecnt,

            # The number of characters of "time zone
            # abbreviation strings" stored in the file.
            charcnt,
                # The number of characters of "time zone
                # abbreviation strings" stored in the file.
                charcnt,

        ) = struct.unpack(">6l", fileobj.read(24))
            ) = struct.unpack(">6l", fileobj.read(24))
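The header read above is six big-endian 32-bit counts following the 4-byte `TZif` magic and 16 reserved bytes. A self-contained sketch of that layout with synthetic bytes (not a real tzfile):

```python
import struct

# Build a fake TZif header: magic, 16 reserved bytes, then the six counts
# (ttisgmtcnt, ttisstdcnt, leapcnt, timecnt, typecnt, charcnt).
counts = (4, 4, 0, 150, 4, 16)
header = b"TZif" + b"\x00" * 16 + struct.pack(">6l", *counts)

assert header[:4] == b"TZif"             # the "magic not found" check
parsed = struct.unpack(">6l", header[20:44])
print(parsed)  # (4, 4, 0, 150, 4, 16)
```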

        # The above header is followed by tzh_timecnt four-byte
        # values of type long, sorted in ascending order.
        # These values are written in ``standard'' byte order.
        # Each is used as a transition time (as returned by
        # time(2)) at which the rules for computing local time
        # change.
        # The above header is followed by tzh_timecnt four-byte
        # values of type long, sorted in ascending order.
        # These values are written in ``standard'' byte order.
        # Each is used as a transition time (as returned by
        # time(2)) at which the rules for computing local time
        # change.

        if timecnt:
            self._trans_list = struct.unpack(">%dl" % timecnt,
                                             fileobj.read(timecnt*4))
        else:
            self._trans_list = []
        if timecnt:
            self._trans_list = struct.unpack(">%dl" % timecnt,
                                             fileobj.read(timecnt*4))
        else:
            self._trans_list = []

        # Next come tzh_timecnt one-byte values of type unsigned
        # char; each one tells which of the different types of
        # ``local time'' types described in the file is associated
        # with the same-indexed transition time. These values
        # serve as indices into an array of ttinfo structures that
        # appears next in the file.

        if timecnt:
            self._trans_idx = struct.unpack(">%dB" % timecnt,
                                            fileobj.read(timecnt))
        else:
            self._trans_idx = []

        # Each ttinfo structure is written as a four-byte value
        # for tt_gmtoff of type long, in a standard byte
        # order, followed by a one-byte value for tt_isdst
        # and a one-byte value for tt_abbrind. In each
        # structure, tt_gmtoff gives the number of
        # seconds to be added to UTC, tt_isdst tells whether
        # tm_isdst should be set by localtime(3), and
        # tt_abbrind serves as an index into the array of
        # time zone abbreviation characters that follow the
        # ttinfo structure(s) in the file.
        # Next come tzh_timecnt one-byte values of type unsigned
        # char; each one tells which of the different types of
        # ``local time'' types described in the file is associated
        # with the same-indexed transition time. These values
        # serve as indices into an array of ttinfo structures that
        # appears next in the file.

        ttinfo = []
        if timecnt:
            self._trans_idx = struct.unpack(">%dB" % timecnt,
                                            fileobj.read(timecnt))
        else:
            self._trans_idx = []

        for i in range(typecnt):
            ttinfo.append(struct.unpack(">lbb", fileobj.read(6)))
        # Each ttinfo structure is written as a four-byte value
        # for tt_gmtoff of type long, in a standard byte
        # order, followed by a one-byte value for tt_isdst
        # and a one-byte value for tt_abbrind. In each
        # structure, tt_gmtoff gives the number of
        # seconds to be added to UTC, tt_isdst tells whether
        # tm_isdst should be set by localtime(3), and
        # tt_abbrind serves as an index into the array of
        # time zone abbreviation characters that follow the
        # ttinfo structure(s) in the file.

        abbr = fileobj.read(charcnt).decode()
        ttinfo = []

        # Then there are tzh_leapcnt pairs of four-byte
        # values, written in standard byte order; the
        # first value of each pair gives the time (as
        # returned by time(2)) at which a leap second
        # occurs; the second gives the total number of
        # leap seconds to be applied after the given time.
        # The pairs of values are sorted in ascending order
        # by time.
        for i in range(typecnt):
            ttinfo.append(struct.unpack(">lbb", fileobj.read(6)))

        # Not used, for now
        if leapcnt:
            leap = struct.unpack(">%dl" % (leapcnt*2),
                                 fileobj.read(leapcnt*8))
        abbr = fileobj.read(charcnt).decode()

        # Then there are tzh_ttisstdcnt standard/wall
        # indicators, each stored as a one-byte value;
        # they tell whether the transition times associated
        # with local time types were specified as standard
        # time or wall clock time, and are used when
        # a time zone file is used in handling POSIX-style
        # time zone environment variables.
        # Then there are tzh_leapcnt pairs of four-byte
        # values, written in standard byte order; the
        # first value of each pair gives the time (as
        # returned by time(2)) at which a leap second
        # occurs; the second gives the total number of
        # leap seconds to be applied after the given time.
        # The pairs of values are sorted in ascending order
        # by time.

        if ttisstdcnt:
            isstd = struct.unpack(">%db" % ttisstdcnt,
                                  fileobj.read(ttisstdcnt))
        # Not used, for now
        # if leapcnt:
        #     leap = struct.unpack(">%dl" % (leapcnt*2),
        #                          fileobj.read(leapcnt*8))

        # Finally, there are tzh_ttisgmtcnt UTC/local
        # indicators, each stored as a one-byte value;
        # they tell whether the transition times associated
        # with local time types were specified as UTC or
        # local time, and are used when a time zone file
        # is used in handling POSIX-style time zone envi-
        # ronment variables.
        # Then there are tzh_ttisstdcnt standard/wall
        # indicators, each stored as a one-byte value;
        # they tell whether the transition times associated
        # with local time types were specified as standard
        # time or wall clock time, and are used when
        # a time zone file is used in handling POSIX-style
        # time zone environment variables.

        if ttisgmtcnt:
            isgmt = struct.unpack(">%db" % ttisgmtcnt,
                                  fileobj.read(ttisgmtcnt))
        if ttisstdcnt:
            isstd = struct.unpack(">%db" % ttisstdcnt,
                                  fileobj.read(ttisstdcnt))

        # ** Everything has been read **
        # Finally, there are tzh_ttisgmtcnt UTC/local
        # indicators, each stored as a one-byte value;
        # they tell whether the transition times associated
        # with local time types were specified as UTC or
        # local time, and are used when a time zone file
        # is used in handling POSIX-style time zone envi-
        # ronment variables.

        if ttisgmtcnt:
            isgmt = struct.unpack(">%db" % ttisgmtcnt,
                                  fileobj.read(ttisgmtcnt))

        # ** Everything has been read **
        finally:
            if file_opened_here:
                fileobj.close()

        # Build ttinfo list
        self._ttinfo_list = []
        for i in range(typecnt):
            gmtoff, isdst, abbrind = ttinfo[i]
            gmtoff, isdst, abbrind = ttinfo[i]
            # Round to full-minutes if that's not the case. Python's
            # datetime doesn't accept sub-minute timezones. Check
            # http://python.org/sf/1447945 for some information.

@ -464,7 +480,7 @@ class tzfile(datetime.tzinfo):
        # However, this class stores historical changes in the
        # dst offset, so I belive that this wouldn't be the right
        # way to implement this.

    @tzname_in_python2
    def tzname(self, dt):
        if not self._ttinfo_std:

@ -481,7 +497,6 @@ class tzfile(datetime.tzinfo):
    def __ne__(self, other):
        return not self.__eq__(other)

    def __repr__(self):
        return "%s(%s)" % (self.__class__.__name__, repr(self._filename))

@ -490,8 +505,8 @@ class tzfile(datetime.tzinfo):
            raise ValueError("Unpickable %s class" % self.__class__.__name__)
        return (self.__class__, (self._filename,))

class tzrange(datetime.tzinfo):

class tzrange(datetime.tzinfo):
    def __init__(self, stdabbr, stdoffset=None,
                 dstabbr=None, dstoffset=None,
                 start=None, end=None):

@ -512,12 +527,12 @@ class tzrange(datetime.tzinfo):
            self._dst_offset = ZERO
        if dstabbr and start is None:
            self._start_delta = relativedelta.relativedelta(
                hours=+2, month=4, day=1, weekday=relativedelta.SU(+1))
                hours=+2, month=4, day=1, weekday=relativedelta.SU(+1))
        else:
            self._start_delta = start
        if dstabbr and end is None:
            self._end_delta = relativedelta.relativedelta(
                hours=+1, month=10, day=31, weekday=relativedelta.SU(-1))
                hours=+1, month=10, day=31, weekday=relativedelta.SU(-1))
        else:
            self._end_delta = end

@ -570,8 +585,9 @@ class tzrange(datetime.tzinfo):

    __reduce__ = object.__reduce__


class tzstr(tzrange):

    def __init__(self, s):
        global parser
        if not parser:

@ -645,9 +661,10 @@ class tzstr(tzrange):
    def __repr__(self):
        return "%s(%s)" % (self.__class__.__name__, repr(self._s))


class _tzicalvtzcomp(object):
    def __init__(self, tzoffsetfrom, tzoffsetto, isdst,
                 tzname=None, rrule=None):
                 tzname=None, rrule=None):
        self.tzoffsetfrom = datetime.timedelta(seconds=tzoffsetfrom)
        self.tzoffsetto = datetime.timedelta(seconds=tzoffsetto)
        self.tzoffsetdiff = self.tzoffsetto-self.tzoffsetfrom

@ -655,6 +672,7 @@ class _tzicalvtzcomp(object):
        self.tzname = tzname
        self.rrule = rrule


class _tzicalvtz(datetime.tzinfo):
    def __init__(self, tzid, comps=[]):
        self._tzid = tzid

@ -718,6 +736,7 @@ class _tzicalvtz(datetime.tzinfo):

    __reduce__ = object.__reduce__


class tzical(object):
    def __init__(self, fileobj):
        global rrule

@ -726,7 +745,8 @@ class tzical(object):

        if isinstance(fileobj, string_types):
            self._s = fileobj
            fileobj = open(fileobj, 'r') # ical should be encoded in UTF-8 with CRLF
            # ical should be encoded in UTF-8 with CRLF
            fileobj = open(fileobj, 'r')
        elif hasattr(fileobj, "name"):
            self._s = fileobj.name
        else:

@ -754,7 +774,7 @@ class tzical(object):
        if not s:
            raise ValueError("empty offset")
        if s[0] in ('+', '-'):
            signal = (-1, +1)[s[0]=='+']
            signal = (-1, +1)[s[0] == '+']
            s = s[1:]
        else:
            signal = +1
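The `(-1, +1)[s[0] == '+']` idiom above picks the sign by indexing a tuple with a boolean (`True` selects the second element). A minimal sketch of the surrounding offset parser for the common `±HHMM` form; `parse_offset` is a hypothetical stand-in for the private `_parse_offset`, which also accepts a six-digit form with seconds:

```python
def parse_offset(s):
    # Returns the offset in seconds for strings like "+0530" or "-0200".
    if not s:
        raise ValueError("empty offset")
    if s[0] in ('+', '-'):
        signal = (-1, +1)[s[0] == '+']  # boolean index: True -> +1
        s = s[1:]
    else:
        signal = +1
    return signal * (int(s[:2]) * 3600 + int(s[2:]) * 60)

print(parse_offset("+0530"))  # 19800
print(parse_offset("-0200"))  # -7200
```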

@ -815,7 +835,8 @@ class tzical(object):
        if not tzid:
            raise ValueError("mandatory TZID not found")
        if not comps:
            raise ValueError("at least one component is needed")
            raise ValueError(
                "at least one component is needed")
        # Process vtimezone
        self._vtz[tzid] = _tzicalvtz(tzid, comps)
        invtz = False

@ -823,9 +844,11 @@ class tzical(object):
        if not founddtstart:
            raise ValueError("mandatory DTSTART not found")
        if tzoffsetfrom is None:
            raise ValueError("mandatory TZOFFSETFROM not found")
            raise ValueError(
                "mandatory TZOFFSETFROM not found")
        if tzoffsetto is None:
            raise ValueError("mandatory TZOFFSETFROM not found")
            raise ValueError(
                "mandatory TZOFFSETFROM not found")
        # Process component
        rr = None
        if rrulelines:

@ -848,15 +871,18 @@ class tzical(object):
                    rrulelines.append(line)
                elif name == "TZOFFSETFROM":
                    if parms:
                        raise ValueError("unsupported %s parm: %s "%(name, parms[0]))
                        raise ValueError(
                            "unsupported %s parm: %s " % (name, parms[0]))
                    tzoffsetfrom = self._parse_offset(value)
                elif name == "TZOFFSETTO":
                    if parms:
                        raise ValueError("unsupported TZOFFSETTO parm: "+parms[0])
                        raise ValueError(
                            "unsupported TZOFFSETTO parm: "+parms[0])
                    tzoffsetto = self._parse_offset(value)
                elif name == "TZNAME":
                    if parms:
                        raise ValueError("unsupported TZNAME parm: "+parms[0])
                        raise ValueError(
                            "unsupported TZNAME parm: "+parms[0])
                    tzname = value
                elif name == "COMMENT":
                    pass

@ -865,7 +891,8 @@ class tzical(object):
            else:
                if name == "TZID":
                    if parms:
                        raise ValueError("unsupported TZID parm: "+parms[0])
                        raise ValueError(
                            "unsupported TZID parm: "+parms[0])
                    tzid = value
                elif name in ("TZURL", "LAST-MODIFIED", "COMMENT"):
                    pass

@ -886,6 +913,7 @@ else:
    TZFILES = []
    TZPATHS = []


def gettz(name=None):
    tz = None
    if not name:

@ -933,11 +961,11 @@ def gettz(name=None):
|
|||
pass
|
||||
else:
|
||||
tz = None
|
||||
if tzwin:
|
||||
if tzwin is not None:
|
||||
try:
|
||||
tz = tzwin(name)
|
||||
except OSError:
|
||||
pass
|
||||
except WindowsError:
|
||||
tz = None
|
||||
if not tz:
|
||||
from dateutil.zoneinfo import gettz
|
||||
tz = gettz(name)
|
||||
|
|
|
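Most hunks above are PEP8 reflows of dateutil's tz module; the one line that is easy to misread is the sign handling in `tzical._parse_offset`. A standalone sketch of that logic (`parse_offset` is a hypothetical helper written for illustration, not dateutil's public API; the HHMM/HHMMSS handling mirrors the surrounding method):

```python
# Sketch of tzical._parse_offset's sign and length handling, assuming
# the method's documented input shapes ("+HHMM", "-HHMM", "+HHMMSS").
def parse_offset(s):
    if not s:
        raise ValueError("empty offset")
    if s[0] in ('+', '-'):
        # the PEP8-spaced comparison introduced by the diff above
        signal = (-1, +1)[s[0] == '+']
        s = s[1:]
    else:
        signal = +1
    if len(s) == 4:
        return (int(s[:2]) * 3600 + int(s[2:]) * 60) * signal
    elif len(s) == 6:
        return (int(s[:2]) * 3600 + int(s[2:4]) * 60 + int(s[4:])) * signal
    else:
        raise ValueError("invalid offset: " + s)

print(parse_offset("+0530"))  # 19800
print(parse_offset("-0200"))  # -7200
```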
@@ -1,8 +1,8 @@
 # This code was originally contributed by Jeffrey Harris.
 import datetime
 import struct
-import winreg
+
+from six.moves import winreg

 __all__ = ["tzwin", "tzwinlocal"]
@@ -12,8 +12,8 @@ TZKEYNAMENT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones"
 TZKEYNAME9X = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Time Zones"
 TZLOCALKEYNAME = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation"

+
 def _settzkeyname():
-    global TZKEYNAME
     handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
     try:
         winreg.OpenKey(handle, TZKEYNAMENT).Close()
@@ -21,8 +21,10 @@ def _settzkeyname():
     except WindowsError:
         TZKEYNAME = TZKEYNAME9X
     handle.Close()
+    return TZKEYNAME

-_settzkeyname()
+TZKEYNAME = _settzkeyname()
+

 class tzwinbase(datetime.tzinfo):
     """tzinfo class based on win32's timezones available in the registry."""
@@ -39,7 +41,7 @@ class tzwinbase(datetime.tzinfo):
             return datetime.timedelta(minutes=minutes)
         else:
             return datetime.timedelta(0)

     def tzname(self, dt):
         if self._isdst(dt):
             return self._dstname
@@ -59,8 +61,11 @@ class tzwinbase(datetime.tzinfo):
     def display(self):
         return self._display

+
     def _isdst(self, dt):
         if not self._dstmonth:
+            # dstmonth == 0 signals the zone has no daylight saving time
             return False
         dston = picknthweekday(dt.year, self._dstmonth, self._dstdayofweek,
                                self._dsthour, self._dstminute,
                                self._dstweeknumber)
@@ -78,31 +83,33 @@ class tzwin(tzwinbase):
     def __init__(self, name):
         self._name = name

-        handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
-        tzkey = winreg.OpenKey(handle, "%s\%s" % (TZKEYNAME, name))
-        keydict = valuestodict(tzkey)
-        tzkey.Close()
-        handle.Close()
+        # multiple contexts only possible in 2.7 and 3.1, we still support 2.6
+        with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
+            with winreg.OpenKey(handle,
+                                "%s\%s" % (TZKEYNAME, name)) as tzkey:
+                keydict = valuestodict(tzkey)
+
         self._stdname = keydict["Std"].encode("iso-8859-1")
         self._dstname = keydict["Dlt"].encode("iso-8859-1")

         self._display = keydict["Display"]

         # See http://ww_winreg.jsiinc.com/SUBA/tip0300/rh0398.htm
         tup = struct.unpack("=3l16h", keydict["TZI"])
-        self._stdoffset = -tup[0]-tup[1]         # Bias + StandardBias * -1
-        self._dstoffset = self._stdoffset-tup[2] # + DaylightBias * -1
+        self._stdoffset = -tup[0]-tup[1]          # Bias + StandardBias * -1
+        self._dstoffset = self._stdoffset-tup[2]  # + DaylightBias * -1

+        # for the meaning see the win32 TIME_ZONE_INFORMATION structure docs
+        # http://msdn.microsoft.com/en-us/library/windows/desktop/ms725481(v=vs.85).aspx
         (self._stdmonth,
-         self._stddayofweek,  # Sunday = 0
-         self._stdweeknumber, # Last = 5
+         self._stddayofweek,   # Sunday = 0
+         self._stdweeknumber,  # Last = 5
          self._stdhour,
          self._stdminute) = tup[4:9]

         (self._dstmonth,
-         self._dstdayofweek,  # Sunday = 0
-         self._dstweeknumber, # Last = 5
+         self._dstdayofweek,   # Sunday = 0
+         self._dstweeknumber,  # Last = 5
          self._dsthour,
          self._dstminute) = tup[12:17]
@@ -114,61 +121,59 @@ class tzwin(tzwinbase):


 class tzwinlocal(tzwinbase):

     def __init__(self):
-        handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
-
-        tzlocalkey = winreg.OpenKey(handle, TZLOCALKEYNAME)
-        keydict = valuestodict(tzlocalkey)
-        tzlocalkey.Close()
-
-        self._stdname = keydict["StandardName"].encode("iso-8859-1")
-        self._dstname = keydict["DaylightName"].encode("iso-8859-1")
-
-        try:
-            tzkey = winreg.OpenKey(handle, "%s\%s"%(TZKEYNAME, self._stdname))
-            _keydict = valuestodict(tzkey)
-            self._display = _keydict["Display"]
-            tzkey.Close()
-        except OSError:
-            self._display = None
-
-        handle.Close()
+        with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
+            with winreg.OpenKey(handle, TZLOCALKEYNAME) as tzlocalkey:
+                keydict = valuestodict(tzlocalkey)
+
+            self._stdname = keydict["StandardName"].encode("iso-8859-1")
+            self._dstname = keydict["DaylightName"].encode("iso-8859-1")
+
+            try:
+                with winreg.OpenKey(
+                        handle, "%s\%s" % (TZKEYNAME, self._stdname)) as tzkey:
+                    _keydict = valuestodict(tzkey)
+                    self._display = _keydict["Display"]
+            except OSError:
+                self._display = None

         self._stdoffset = -keydict["Bias"]-keydict["StandardBias"]
         self._dstoffset = self._stdoffset-keydict["DaylightBias"]

         # See http://ww_winreg.jsiinc.com/SUBA/tip0300/rh0398.htm
         tup = struct.unpack("=8h", keydict["StandardStart"])

         (self._stdmonth,
-         self._stddayofweek,  # Sunday = 0
-         self._stdweeknumber, # Last = 5
+         self._stddayofweek,   # Sunday = 0
+         self._stdweeknumber,  # Last = 5
          self._stdhour,
          self._stdminute) = tup[1:6]

         tup = struct.unpack("=8h", keydict["DaylightStart"])

         (self._dstmonth,
-         self._dstdayofweek,  # Sunday = 0
-         self._dstweeknumber, # Last = 5
+         self._dstdayofweek,   # Sunday = 0
+         self._dstweeknumber,  # Last = 5
          self._dsthour,
          self._dstminute) = tup[1:6]

     def __reduce__(self):
         return (self.__class__, ())

+
 def picknthweekday(year, month, dayofweek, hour, minute, whichweek):
     """dayofweek == 0 means Sunday, whichweek 5 means last instance"""
     first = datetime.datetime(year, month, 1, hour, minute)
-    weekdayone = first.replace(day=((dayofweek-first.isoweekday())%7+1))
+    weekdayone = first.replace(day=((dayofweek-first.isoweekday()) % 7+1))
     for n in range(whichweek):
         dt = weekdayone+(whichweek-n)*ONEWEEK
         if dt.month == month:
             return dt

+
 def valuestodict(key):
     """Convert a registry key's values to a dictionary."""
     dict = {}
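`picknthweekday` at the bottom of this file is pure Python, so the reflowed modulo line can be sanity-checked without a Windows registry. A self-contained copy of the function as it stands in this diff (`ONEWEEK` is defined elsewhere in the module as `datetime.timedelta(7)`; the "last instance" case, `whichweek == 5`, is the one the loop is designed around):

```python
import datetime

# Defined at module level in tzwin.py; reproduced here so the sketch runs.
ONEWEEK = datetime.timedelta(7)


def picknthweekday(year, month, dayofweek, hour, minute, whichweek):
    """dayofweek == 0 means Sunday, whichweek 5 means last instance"""
    first = datetime.datetime(year, month, 1, hour, minute)
    # Anchor on the first occurrence of `dayofweek` in the month, then step
    # back from `whichweek` weeks ahead until the candidate lands in-month.
    weekdayone = first.replace(day=((dayofweek - first.isoweekday()) % 7 + 1))
    for n in range(whichweek):
        dt = weekdayone + (whichweek - n) * ONEWEEK
        if dt.month == month:
            return dt

# Last (5th-or-fewer) Sunday of March 2015 at 02:00:
print(picknthweekday(2015, 3, 0, 2, 0, 5))  # 2015-03-29 02:00:00
```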
@@ -1,109 +1,135 @@
-"""
-Copyright (c) 2003-2005 Gustavo Niemeyer <gustavo@niemeyer.net>
-
-This module offers extensions to the standard Python
-datetime module.
-"""
+# -*- coding: utf-8 -*-
 import logging
 import os
-from subprocess import call
 import warnings
+import tempfile
+import shutil
+import json

+from subprocess import check_call
 from tarfile import TarFile
+from pkgutil import get_data
+from io import BytesIO
+from contextlib import closing

 from dateutil.tz import tzfile

-__author__ = "Tomi Pieviläinen <tomi.pievilainen@iki.fi>"
-__license__ = "Simplified BSD"
-
-__all__ = ["setcachesize", "gettz", "rebuild"]
+__all__ = ["gettz", "gettz_db_metadata", "rebuild"]

+_ZONEFILENAME = "dateutil-zoneinfo.tar.gz"
+_METADATA_FN = 'METADATA'
+
+# python2.6 compatability. Note that TarFile.__exit__ != TarFile.close, but
+# it's close enough for python2.6
+_tar_open = TarFile.open
+if not hasattr(TarFile, '__exit__'):
+    def _tar_open(*args, **kwargs):
+        return closing(TarFile.open(*args, **kwargs))

-CACHE = []
-CACHESIZE = 10

 class tzfile(tzfile):
     def __reduce__(self):
         return (gettz, (self._filename,))

-def getzoneinfofile():
-    filenames = sorted(os.listdir(os.path.join(os.path.dirname(__file__))))
-    filenames.reverse()
-    for entry in filenames:
-        if entry.startswith("zoneinfo") and ".tar." in entry:
-            return os.path.join(os.path.dirname(__file__), entry)
-    return None
-
-ZONEINFOFILE = getzoneinfofile()
-
-del getzoneinfofile
-
-def setcachesize(size):
-    global CACHESIZE, CACHE
-    CACHESIZE = size
-    del CACHE[size:]
+
+def getzoneinfofile_stream():
+    try:
+        return BytesIO(get_data(__name__, _ZONEFILENAME))
+    except IOError as e:  # TODO switch to FileNotFoundError?
+        warnings.warn("I/O error({0}): {1}".format(e.errno, e.strerror))
+        return None
+
+
+class ZoneInfoFile(object):
+    def __init__(self, zonefile_stream=None):
+        if zonefile_stream is not None:
+            with _tar_open(fileobj=zonefile_stream, mode='r') as tf:
+                # dict comprehension does not work on python2.6
+                # TODO: get back to the nicer syntax when we ditch python2.6
+                # self.zones = {zf.name: tzfile(tf.extractfile(zf),
+                #               filename = zf.name)
+                #               for zf in tf.getmembers() if zf.isfile()}
+                self.zones = dict((zf.name, tzfile(tf.extractfile(zf),
+                                                   filename=zf.name))
+                                  for zf in tf.getmembers()
+                                  if zf.isfile() and zf.name != _METADATA_FN)
+                # deal with links: They'll point to their parent object. Less
+                # waste of memory
+                # links = {zl.name: self.zones[zl.linkname]
+                #          for zl in tf.getmembers() if zl.islnk() or zl.issym()}
+                links = dict((zl.name, self.zones[zl.linkname])
+                             for zl in tf.getmembers() if
+                             zl.islnk() or zl.issym())
+                self.zones.update(links)
+                try:
+                    metadata_json = tf.extractfile(tf.getmember(_METADATA_FN))
+                    metadata_str = metadata_json.read().decode('UTF-8')
+                    self.metadata = json.loads(metadata_str)
+                except KeyError:
+                    # no metadata in tar file
+                    self.metadata = None
+        else:
+            self.zones = dict()
+            self.metadata = None
+
+
+# The current API has gettz as a module function, although in fact it taps into
+# a stateful class. So as a workaround for now, without changing the API, we
+# will create a new "global" class instance the first time a user requests a
+# timezone. Ugly, but adheres to the api.
+#
+# TODO: deprecate this.
+_CLASS_ZONE_INSTANCE = list()
+

 def gettz(name):
-    tzinfo = None
-    if ZONEINFOFILE:
-        for cachedname, tzinfo in CACHE:
-            if cachedname == name:
-                break
-        else:
-            tf = TarFile.open(ZONEINFOFILE)
-            try:
-                zonefile = tf.extractfile(name)
-            except KeyError:
-                tzinfo = None
-            else:
-                tzinfo = tzfile(zonefile)
-            tf.close()
-            CACHE.insert(0, (name, tzinfo))
-            del CACHE[CACHESIZE:]
-    return tzinfo
+    if len(_CLASS_ZONE_INSTANCE) == 0:
+        _CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream()))
+    return _CLASS_ZONE_INSTANCE[0].zones.get(name)

-def rebuild(filename, tag=None, format="gz"):
+
+def gettz_db_metadata():
+    """ Get the zonefile metadata
+
+    See `zonefile_metadata`_
+
+    :returns: A dictionary with the database metadata
+    """
+    if len(_CLASS_ZONE_INSTANCE) == 0:
+        _CLASS_ZONE_INSTANCE.append(ZoneInfoFile(getzoneinfofile_stream()))
+    return _CLASS_ZONE_INSTANCE[0].metadata
+
+
+def rebuild(filename, tag=None, format="gz", zonegroups=[], metadata=None):
     """Rebuild the internal timezone info in dateutil/zoneinfo/zoneinfo*tar*

     filename is the timezone tarball from ftp.iana.org/tz.

     """
-    import tempfile, shutil
     tmpdir = tempfile.mkdtemp()
     zonedir = os.path.join(tmpdir, "zoneinfo")
     moduledir = os.path.dirname(__file__)
-    if tag: tag = "-"+tag
-    targetname = "zoneinfo%s.tar.%s" % (tag, format)
     try:
-        tf = TarFile.open(filename)
-        # The "backwards" zone file contains links to other files, so must be
-        # processed as last
-        for name in sorted(tf.getnames(),
-                           key=lambda k: k != "backward" and k or "z"):
-            if not (name.endswith(".sh") or
-                    name.endswith(".tab") or
-                    name == "leapseconds"):
-                tf.extract(name, tmpdir)
-                filepath = os.path.join(tmpdir, name)
-                try:
-                    # zic will return errors for nontz files in the package
-                    # such as the Makefile or README, so check_call cannot
-                    # be used (or at least extra checks would be needed)
-                    call(["zic", "-d", zonedir, filepath])
-                except OSError as e:
-                    if e.errno == 2:
-                        logging.error(
-                            "Could not find zic. Perhaps you need to install "
-                            "libc-bin or some other package that provides it, "
-                            "or it's not in your PATH?")
-                    raise
-        tf.close()
-        target = os.path.join(moduledir, targetname)
-        for entry in os.listdir(moduledir):
-            if entry.startswith("zoneinfo") and ".tar." in entry:
-                os.unlink(os.path.join(moduledir, entry))
-        tf = TarFile.open(target, "w:%s" % format)
-        for entry in os.listdir(zonedir):
-            entrypath = os.path.join(zonedir, entry)
-            tf.add(entrypath, entry)
-        tf.close()
+        with _tar_open(filename) as tf:
+            for name in zonegroups:
+                tf.extract(name, tmpdir)
+        filepaths = [os.path.join(tmpdir, n) for n in zonegroups]
+        try:
+            check_call(["zic", "-d", zonedir] + filepaths)
+        except OSError as e:
+            if e.errno == 2:
+                logging.error(
+                    "Could not find zic. Perhaps you need to install "
+                    "libc-bin or some other package that provides it, "
+                    "or it's not in your PATH?")
+            raise
+        # write metadata file
+        with open(os.path.join(zonedir, _METADATA_FN), 'w') as f:
+            json.dump(metadata, f, indent=4, sort_keys=True)
+        target = os.path.join(moduledir, _ZONEFILENAME)
+        with _tar_open(target, "w:%s" % format) as tf:
+            for entry in os.listdir(zonedir):
+                entrypath = os.path.join(zonedir, entry)
+                tf.add(entrypath, entry)
    finally:
         shutil.rmtree(tmpdir)
@@ -1,44 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright 2007 Doug Hellmann.
-#
-#
-# All Rights Reserved
-#
-# Permission to use, copy, modify, and distribute this software and
-# its documentation for any purpose and without fee is hereby
-# granted, provided that the above copyright notice appear in all
-# copies and that both that copyright notice and this permission
-# notice appear in supporting documentation, and that the name of Doug
-# Hellmann not be used in advertising or publicity pertaining to
-# distribution of the software without specific, written prior
-# permission.
-#
-# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
-# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
-# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
-# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
-# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
-# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
-# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-#
-
-"""
-
-"""
-
-__module_id__ = "$Id$"
-
-#
-# Import system modules
-#
-
-
-#
-# Import local modules
-#
-from cache import Cache
-
-#
-# Module
-#
@@ -1,204 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright 2007 Doug Hellmann.
-#
-#
-# All Rights Reserved
-#
-# Permission to use, copy, modify, and distribute this software and
-# its documentation for any purpose and without fee is hereby
-# granted, provided that the above copyright notice appear in all
-# copies and that both that copyright notice and this permission
-# notice appear in supporting documentation, and that the name of Doug
-# Hellmann not be used in advertising or publicity pertaining to
-# distribution of the software without specific, written prior
-# permission.
-#
-# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
-# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
-# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
-# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
-# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
-# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
-# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-#
-
-"""
-
-"""
-
-__module_id__ = "$Id$"
-
-#
-# Import system modules
-#
-from feedparser import feedparser
-
-import logging
-import time
-
-#
-# Import local modules
-#
-
-
-#
-# Module
-#
-
-logger = logging.getLogger('feedcache.cache')
-
-
-class Cache:
-    """A class to wrap Mark Pilgrim's Universal Feed Parser module
-    (http://www.feedparser.org) so that parameters can be used to
-    cache the feed results locally instead of fetching the feed every
-    time it is requested. Uses both etag and modified times for
-    caching.
-    """
-
-    def __init__(self, storage, timeToLiveSeconds=300, userAgent='feedcache'):
-        """
-        Arguments:
-
-          storage -- Backing store for the cache. It should follow
-          the dictionary API, with URLs used as keys. It should
-          persist data.
-
-          timeToLiveSeconds=300 -- The length of time content should
-          live in the cache before an update is attempted.
-
-          userAgent='feedcache' -- User agent string to be used when
-          fetching feed contents.
-
-        """
-        self.storage = storage
-        self.time_to_live = timeToLiveSeconds
-        self.user_agent = userAgent
-        return
-
-    def purge(self, olderThanSeconds):
-        """Remove cached data from the storage if the data is older than the
-        date given. If olderThanSeconds is None, the entire cache is purged.
-        """
-        if olderThanSeconds is None:
-            logger.debug('purging the entire cache')
-            for key in self.storage.keys():
-                del self.storage[key]
-        else:
-            now = time.time()
-            # Iterate over the keys and load each item one at a time
-            # to avoid having the entire cache loaded into memory
-            # at one time.
-            for url in self.storage.keys():
-                (cached_time, cached_data) = self.storage[url]
-                age = now - cached_time
-                if age >= olderThanSeconds:
-                    logger.debug('removing %s with age %d', url, age)
-                    del self.storage[url]
-        return
-
-    def fetch(self, url, force_update=False, offline=False, request_headers=None):
-        """Return the feed at url.
-
-        url - The URL of the feed.
-
-        force_update=False - When True, update the cache whether the
-        current contents have exceeded their time-to-live or not.
-
-        offline=False - When True, only return data from the local
-        cache and never access the remote URL.
-
-        If there is data for that feed in the cache already, check
-        the expiration date before accessing the server. If the
-        cached data has not expired, return it without accessing the
-        server.
-
-        In cases where the server is accessed, check for updates
-        before deciding what to return. If the server reports a
-        status of 304, the previously cached content is returned.
-
-        The cache is only updated if the server returns a status of
-        200, to avoid holding redirected data in the cache.
-        """
-        logger.debug('url="%s"' % url)
-
-        # Convert the URL to a value we can use
-        # as a key for the storage backend.
-        key = url
-        if isinstance(key, unicode):
-            key = key.encode('utf-8')
-
-        modified = None
-        etag = None
-        now = time.time()
-
-        cached_time, cached_content = self.storage.get(key, (None, None))
-
-        # Offline mode support (no networked requests)
-        # so return whatever we found in the storage.
-        # If there is nothing in the storage, we'll be returning None.
-        if offline:
-            logger.debug('offline mode')
-            return cached_content
-
-        # Does the storage contain a version of the data
-        # which is older than the time-to-live?
-        logger.debug('cache modified time: %s' % str(cached_time))
-        if cached_time is not None and not force_update:
-            if self.time_to_live:
-                age = now - cached_time
-                if age <= self.time_to_live:
-                    logger.debug('cache contents still valid')
-                    return cached_content
-                else:
-                    logger.debug('cache contents older than TTL')
-            else:
-                logger.debug('no TTL value')
-
-            # The cache is out of date, but we have
-            # something. Try to use the etag and modified_time
-            # values from the cached content.
-            etag = cached_content.get('etag')
-            modified = cached_content.get('modified')
-            logger.debug('cached etag=%s' % etag)
-            logger.debug('cached modified=%s' % str(modified))
-        else:
-            logger.debug('nothing in the cache, or forcing update')
-
-        # We know we need to fetch, so go ahead and do it.
-        logger.debug('fetching...')
-        parsed_result = feedparser.parse(url,
-                                         agent=self.user_agent,
-                                         modified=modified,
-                                         etag=etag,
-                                         request_headers=request_headers)
-
-        status = parsed_result.get('status', None)
-        logger.debug('HTTP status=%s' % status)
-        if status == 304:
-            # No new data, based on the etag or modified values.
-            # We need to update the modified time in the
-            # storage, though, so we know that what we have
-            # stored is up to date.
-            self.storage[key] = (now, cached_content)
-
-            # Return the data from the cache, since
-            # the parsed data will be empty.
-            parsed_result = cached_content
-        elif status == 200:
-            # There is new content, so store it unless there was an error.
-            error = parsed_result.get('bozo_exception')
-            if not error:
-                logger.debug('Updating stored data for %s' % url)
-                self.storage[key] = (now, parsed_result)
-            else:
-                logger.warning('Not storing data with exception: %s',
-                               error)
-        else:
-            logger.warning('Not updating cache with HTTP status %s', status)
-
-        return parsed_result
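The TTL branch of `Cache.fetch()` being removed above decides whether cached feed data is still servable before any network access is attempted. Its core age check, extracted as a small sketch (`is_fresh` is a hypothetical helper written for illustration, not part of feedcache):

```python
import time


def is_fresh(cached_time, ttl, now=None):
    # Mirrors the TTL check in Cache.fetch(): cached contents are served
    # as long as their age has not exceeded the time-to-live window.
    if cached_time is None:
        return False  # nothing cached yet, must fetch
    if now is None:
        now = time.time()
    return (now - cached_time) <= ttl


print(is_fresh(1000, 300, now=1200))  # True  (age 200 <= 300)
print(is_fresh(1000, 300, now=1400))  # False (age 400 > 300)
```

When this check fails, the original code falls back to a conditional request using the cached `etag` and `modified` values, so a stale-but-unchanged feed still avoids a full download via HTTP 304.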
@@ -1,69 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright 2007 Doug Hellmann.
-#
-#
-# All Rights Reserved
-#
-# Permission to use, copy, modify, and distribute this software and
-# its documentation for any purpose and without fee is hereby
-# granted, provided that the above copyright notice appear in all
-# copies and that both that copyright notice and this permission
-# notice appear in supporting documentation, and that the name of Doug
-# Hellmann not be used in advertising or publicity pertaining to
-# distribution of the software without specific, written prior
-# permission.
-#
-# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
-# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
-# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
-# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
-# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
-# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
-# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-#
-from __future__ import with_statement
-
-"""Lock wrapper for cache storage which do not permit multi-threaded access.
-
-"""
-
-__module_id__ = "$Id$"
-
-#
-# Import system modules
-#
-import threading
-
-#
-# Import local modules
-#
-
-
-#
-# Module
-#
-
-class CacheStorageLock:
-    """Lock wrapper for cache storage which do not permit multi-threaded access.
-    """
-
-    def __init__(self, shelf):
-        self.lock = threading.Lock()
-        self.shelf = shelf
-        return
-
-    def __getitem__(self, key):
-        with self.lock:
-            return self.shelf[key]
-
-    def get(self, key, default=None):
-        with self.lock:
-            try:
-                return self.shelf[key]
-            except KeyError:
-                return default
-
-    def __setitem__(self, key, value):
-        with self.lock:
-            self.shelf[key] = value
@@ -1,63 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright 2007 Doug Hellmann.
-#
-#
-# All Rights Reserved
-#
-# Permission to use, copy, modify, and distribute this software and
-# its documentation for any purpose and without fee is hereby
-# granted, provided that the above copyright notice appear in all
-# copies and that both that copyright notice and this permission
-# notice appear in supporting documentation, and that the name of Doug
-# Hellmann not be used in advertising or publicity pertaining to
-# distribution of the software without specific, written prior
-# permission.
-#
-# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
-# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
-# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
-# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
-# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
-# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
-# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-#
-
-"""Example use of feedcache.Cache.
-
-"""
-
-__module_id__ = "$Id$"
-
-#
-# Import system modules
-#
-import sys
-import shelve
-
-#
-# Import local modules
-#
-import cache
-
-#
-# Module
-#
-
-def main(urls=[]):
-    print 'Saving feed data to ./.feedcache'
-    storage = shelve.open('.feedcache')
-    try:
-        fc = cache.Cache(storage)
-        for url in urls:
-            parsed_data = fc.fetch(url)
-            print parsed_data.feed.title
-            for entry in parsed_data.entries:
-                print '\t', entry.title
-    finally:
-        storage.close()
-    return
-
-if __name__ == '__main__':
-    main(sys.argv[1:])
@ -1,144 +0,0 @@
|
|||
#!/usr/bin/env python
|
||||
#
|
||||
# Copyright 2007 Doug Hellmann.
|
||||
#
|
||||
#
|
||||
# All Rights Reserved
|
||||
#
|
||||
# Permission to use, copy, modify, and distribute this software and
|
||||
# its documentation for any purpose and without fee is hereby
|
||||
# granted, provided that the above copyright notice appear in all
|
||||
# copies and that both that copyright notice and this permission
|
||||
# notice appear in supporting documentation, and that the name of Doug
|
||||
# Hellmann not be used in advertising or publicity pertaining to
|
||||
# distribution of the software without specific, written prior
|
||||
# permission.
|
||||
#
|
# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#

"""Example use of feedcache.Cache combined with threads.
"""

__module_id__ = "$Id$"

#
# Import system modules
#
import Queue
import sys
import shove
import threading

#
# Import local modules
#
import cache

#
# Module
#

MAX_THREADS = 5
OUTPUT_DIR = '/tmp/feedcache_example'


def main(urls=[]):

    if not urls:
        print 'Specify the URLs to a few RSS or Atom feeds on the command line.'
        return

    # Decide how many threads to start
    num_threads = min(len(urls), MAX_THREADS)

    # Add the URLs to a queue
    url_queue = Queue.Queue()
    for url in urls:
        url_queue.put(url)

    # Add poison pills to the url queue to cause
    # the worker threads to break out of their loops
    for i in range(num_threads):
        url_queue.put(None)

    # Track the entries in the feeds being fetched
    entry_queue = Queue.Queue()

    print 'Saving feed data to', OUTPUT_DIR
    storage = shove.Shove('file://' + OUTPUT_DIR)
    try:

        # Start a few worker threads
        worker_threads = []
        for i in range(num_threads):
            t = threading.Thread(target=fetch_urls,
                                 args=(storage, url_queue, entry_queue,))
            worker_threads.append(t)
            t.setDaemon(True)
            t.start()

        # Start a thread to print the results
        printer_thread = threading.Thread(target=print_entries, args=(entry_queue,))
        printer_thread.setDaemon(True)
        printer_thread.start()

        # Wait for all of the URLs to be processed
        url_queue.join()

        # Wait for the worker threads to finish
        for t in worker_threads:
            t.join()

        # Poison the print thread and wait for it to exit
        entry_queue.put((None, None))
        entry_queue.join()
        printer_thread.join()

    finally:
        storage.close()
    return


def fetch_urls(storage, input_queue, output_queue):
    """Thread target for fetching feed data.
    """
    c = cache.Cache(storage)

    while True:
        next_url = input_queue.get()
        if next_url is None:  # None causes thread to exit
            input_queue.task_done()
            break

        feed_data = c.fetch(next_url)
        for entry in feed_data.entries:
            output_queue.put((feed_data.feed, entry))
        input_queue.task_done()
    return


def print_entries(input_queue):
    """Thread target for printing the contents of the feeds.
    """
    while True:
        feed, entry = input_queue.get()
        if feed is None:  # None causes thread to exit
            input_queue.task_done()
            break

        print '%s: %s' % (feed.title, entry.title)
        input_queue.task_done()
    return


if __name__ == '__main__':
    main(sys.argv[1:])
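The removed example above is Python 2 (`Queue` module, `print` statements). For reference, the same poison-pill shutdown pattern it demonstrates can be sketched in Python 3; the names (`worker`, `run`) are illustrative and not part of feedcache:

```python
import queue
import threading

def worker(in_q, out_q):
    # Pull items until the poison pill (None) arrives.
    while True:
        item = in_q.get()
        if item is None:  # poison pill: exit the loop
            in_q.task_done()
            break
        out_q.put(item * 2)  # stand-in for the real work (c.fetch(url))
        in_q.task_done()

def run(items, num_threads=2):
    in_q = queue.Queue()
    out_q = queue.Queue()
    for item in items:
        in_q.put(item)
    for _ in range(num_threads):
        in_q.put(None)  # one pill per worker
    threads = [threading.Thread(target=worker, args=(in_q, out_q), daemon=True)
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    in_q.join()   # wait until every item (and pill) is task_done()
    for t in threads:
        t.join()
    return sorted(out_q.queue)
```

One pill per worker is the key detail: each worker consumes exactly one `None` and stops, so `join()` on the queue and on the threads both return.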
@ -1,323 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2007 Doug Hellmann.
#
#
# All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Doug
# Hellmann not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#

"""Unittests for feedcache.cache
"""

__module_id__ = "$Id$"

import logging
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s %(levelname)-8s %(name)s %(message)s',
                    )
logger = logging.getLogger('feedcache.test_cache')

#
# Import system modules
#
import copy
import time
import unittest
import UserDict

#
# Import local modules
#
import cache
from test_server import HTTPTestBase, TestHTTPServer

#
# Module
#


class CacheTestBase(HTTPTestBase):

    CACHE_TTL = 30

    def setUp(self):
        HTTPTestBase.setUp(self)

        self.storage = self.getStorage()
        self.cache = cache.Cache(self.storage,
                                 timeToLiveSeconds=self.CACHE_TTL,
                                 userAgent='feedcache.test',
                                 )
        return

    def getStorage(self):
        "Return a cache storage for the test."
        return {}


class CacheTest(CacheTestBase):

    CACHE_TTL = 30

    def getServer(self):
        "These tests do not want to use the ETag or If-Modified-Since headers"
        return TestHTTPServer(applyModifiedHeaders=False)

    def testRetrieveNotInCache(self):
        # Retrieve data not already in the cache.
        feed_data = self.cache.fetch(self.TEST_URL)
        self.failUnless(feed_data)
        self.failUnlessEqual(feed_data.feed.title, 'CacheTest test data')
        return

    def testRetrieveIsInCache(self):
        # Retrieve data which is already in the cache,
        # and verify that the second copy is identical
        # to the first.

        # First fetch
        feed_data = self.cache.fetch(self.TEST_URL)

        # Second fetch
        feed_data2 = self.cache.fetch(self.TEST_URL)

        # Since it is the in-memory storage, we should have the
        # exact same object.
        self.failUnless(feed_data is feed_data2)
        return

    def testExpireDataInCache(self):
        # Retrieve data which is in the cache but which
        # has expired and verify that the second copy
        # is different from the first.

        # First fetch
        feed_data = self.cache.fetch(self.TEST_URL)

        # Change the timeout and sleep to move the clock
        self.cache.time_to_live = 0
        time.sleep(1)

        # Second fetch
        feed_data2 = self.cache.fetch(self.TEST_URL)

        # Since we reparsed, the cache response should be different.
        self.failIf(feed_data is feed_data2)
        return

    def testForceUpdate(self):
        # Force cache to retrieve data which is already in the cache,
        # and verify that the new data is different.

        # Pre-populate the storage with bad data
        self.cache.storage[self.TEST_URL] = (time.time() + 100, self.id())

        # Fetch the data
        feed_data = self.cache.fetch(self.TEST_URL, force_update=True)

        self.failIfEqual(feed_data, self.id())
        return

    def testOfflineMode(self):
        # Retrieve data which is already in the cache,
        # whether it is expired or not.

        # Pre-populate the storage with data
        self.cache.storage[self.TEST_URL] = (0, self.id())

        # Fetch it
        feed_data = self.cache.fetch(self.TEST_URL, offline=True)

        self.failUnlessEqual(feed_data, self.id())
        return

    def testUnicodeURL(self):
        # Pass in a URL which is unicode

        url = unicode(self.TEST_URL)
        feed_data = self.cache.fetch(url)

        storage = self.cache.storage
        key = unicode(self.TEST_URL).encode('UTF-8')

        # Verify that the storage has a key
        self.failUnless(key in storage)

        # Now pull the data from the storage directly
        storage_timeout, storage_data = self.cache.storage.get(key)
        self.failUnlessEqual(feed_data, storage_data)
        return


class SingleWriteMemoryStorage(UserDict.UserDict):
    """Cache storage which only allows the cache value
    for a URL to be updated one time.
    """

    def __setitem__(self, url, data):
        if url in self.keys():
            modified, existing = self[url]
            # Allow the modified time to change,
            # but not the feed content.
            if data[1] != existing:
                raise AssertionError('Trying to update cache for %s to %s'
                                     % (url, data))
        UserDict.UserDict.__setitem__(self, url, data)
        return


class CacheConditionalGETTest(CacheTestBase):

    CACHE_TTL = 0

    def getStorage(self):
        return SingleWriteMemoryStorage()

    def testFetchOnceForEtag(self):
        # Fetch data which has a valid ETag value, and verify
        # that while we hit the server twice the response
        # codes cause us to use the same data.

        # First fetch populates the cache
        response1 = self.cache.fetch(self.TEST_URL)
        self.failUnlessEqual(response1.feed.title, 'CacheTest test data')

        # Remove the modified setting from the cache so we know
        # the next time we check the etag will be used
        # to check for updates.  Since we are using an in-memory
        # cache, modifying response1 updates the cache storage
        # directly.
        response1['modified'] = None

        # This should result in a 304 status, and no data from
        # the server.  That means the cache won't try to
        # update the storage, so our SingleWriteMemoryStorage
        # should not raise and we should have the same
        # response object.
        response2 = self.cache.fetch(self.TEST_URL)
        self.failUnless(response1 is response2)

        # Should have hit the server twice
        self.failUnlessEqual(self.server.getNumRequests(), 2)
        return

    def testFetchOnceForModifiedTime(self):
        # Fetch data which has a valid Last-Modified value, and verify
        # that while we hit the server twice the response
        # codes cause us to use the same data.

        # First fetch populates the cache
        response1 = self.cache.fetch(self.TEST_URL)
        self.failUnlessEqual(response1.feed.title, 'CacheTest test data')

        # Remove the etag setting from the cache so we know
        # the next time we check the modified time will be used
        # to check for updates.  Since we are using an in-memory
        # cache, modifying response1 updates the cache storage
        # directly.
        response1['etag'] = None

        # This should result in a 304 status, and no data from
        # the server.  That means the cache won't try to
        # update the storage, so our SingleWriteMemoryStorage
        # should not raise and we should have the same
        # response object.
        response2 = self.cache.fetch(self.TEST_URL)
        self.failUnless(response1 is response2)

        # Should have hit the server twice
        self.failUnlessEqual(self.server.getNumRequests(), 2)
        return


class CacheRedirectHandlingTest(CacheTestBase):

    def _test(self, response):
        # Set up the server to redirect requests,
        # then verify that the cache is not updated
        # for the original or new URL and that the
        # redirect status is fed back to us with
        # the fetched data.

        self.server.setResponse(response, '/redirected')

        response1 = self.cache.fetch(self.TEST_URL)

        # The response should include the status code we set
        self.failUnlessEqual(response1.get('status'), response)

        # The response should include the new URL, too
        self.failUnlessEqual(response1.href, self.TEST_URL + 'redirected')

        # The response should not have been cached under either URL
        self.failIf(self.TEST_URL in self.storage)
        self.failIf(self.TEST_URL + 'redirected' in self.storage)
        return

    def test301(self):
        self._test(301)

    def test302(self):
        self._test(302)

    def test303(self):
        self._test(303)

    def test307(self):
        self._test(307)


class CachePurgeTest(CacheTestBase):

    def testPurgeAll(self):
        # Remove everything from the cache

        self.cache.fetch(self.TEST_URL)
        self.failUnless(self.storage.keys(),
                        'Have no data in the cache storage')

        self.cache.purge(None)

        self.failIf(self.storage.keys(),
                    'Still have data in the cache storage')
        return

    def testPurgeByAge(self):
        # Remove old content from the cache

        self.cache.fetch(self.TEST_URL)
        self.failUnless(self.storage.keys(),
                        'have no data in the cache storage')

        time.sleep(1)

        remains = (time.time(), copy.deepcopy(self.storage[self.TEST_URL][1]))
        self.storage['http://this.should.remain/'] = remains

        self.cache.purge(1)

        self.failUnlessEqual(self.storage.keys(),
                             ['http://this.should.remain/'])
        return


if __name__ == '__main__':
    unittest.main()
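The two conditional-GET tests above depend on the cache replaying the server's validators (`ETag`, `Last-Modified`) and keeping its stored copy on a 304. That decision logic can be sketched as follows; the helper names are hypothetical, not the feedcache API:

```python
def revalidate_headers(cached_etag, cached_modified):
    """Build the conditional request headers a cache would send
    when revalidating a stored entry."""
    headers = {}
    if cached_etag:
        headers['If-None-Match'] = cached_etag
    if cached_modified:
        headers['If-Modified-Since'] = cached_modified
    return headers

def choose_response(status, cached_body, fresh_body):
    # On 304 Not Modified, keep the cached copy untouched;
    # any other status replaces it with the fresh response.
    return cached_body if status == 304 else fresh_body
```

This is why `SingleWriteMemoryStorage` never raises in those tests: a 304 path never writes to storage at all.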
@ -1,90 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2007 Doug Hellmann.
#
#
# All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Doug
# Hellmann not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#

"""Tests for shelflock.
"""

__module_id__ = "$Id$"

#
# Import system modules
#
import os
import shelve
import tempfile
import threading
import unittest

#
# Import local modules
#
from cache import Cache
from cachestoragelock import CacheStorageLock
from test_server import HTTPTestBase

#
# Module
#

class CacheShelveTest(HTTPTestBase):

    def setUp(self):
        HTTPTestBase.setUp(self)
        handle, self.shelve_filename = tempfile.mkstemp('.shelve')
        os.close(handle)  # we just want the file name, so close the open handle
        os.unlink(self.shelve_filename)  # remove the empty file
        return

    def tearDown(self):
        try:
            os.unlink(self.shelve_filename)
        except AttributeError:
            pass
        HTTPTestBase.tearDown(self)
        return

    def test(self):
        storage = shelve.open(self.shelve_filename)
        locking_storage = CacheStorageLock(storage)
        try:
            fc = Cache(locking_storage)

            # First fetch the data through the cache
            parsed_data = fc.fetch(self.TEST_URL)
            self.failUnlessEqual(parsed_data.feed.title, 'CacheTest test data')

            # Now retrieve the same data directly from the shelf
            modified, shelved_data = storage[self.TEST_URL]

            # The data should be the same
            self.failUnlessEqual(parsed_data, shelved_data)
        finally:
            storage.close()
        return


if __name__ == '__main__':
    unittest.main()
@ -1,241 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2007 Doug Hellmann.
#
#
# All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Doug
# Hellmann not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#

"""Simple HTTP server for testing the feed cache.
"""

__module_id__ = "$Id$"

#
# Import system modules
#
import BaseHTTPServer
import logging
import md5
import threading
import time
import unittest
import urllib

#
# Import local modules
#


#
# Module
#
logger = logging.getLogger('feedcache.test_server')


def make_etag(data):
    """Given a string containing data to be returned to the client,
    compute an ETag value for the data.
    """
    _md5 = md5.new()
    _md5.update(data)
    return _md5.hexdigest()


class TestHTTPHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    "HTTP request handler which serves the same feed data every time."

    FEED_DATA = """<?xml version="1.0" encoding="utf-8"?>

<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-us">
  <title>CacheTest test data</title>
  <link href="http://localhost/feedcache/" rel="alternate"></link>
  <link href="http://localhost/feedcache/atom/" rel="self"></link>
  <id>http://localhost/feedcache/</id>
  <updated>2006-10-14T11:00:36Z</updated>
  <entry>
    <title>single test entry</title>
    <link href="http://www.example.com/" rel="alternate"></link>
    <updated>2006-10-14T11:00:36Z</updated>
    <author>
      <name>author goes here</name>
      <email>authoremail@example.com</email>
    </author>
    <id>http://www.example.com/</id>
    <summary type="html">description goes here</summary>
    <link length="100" href="http://www.example.com/enclosure" type="text/html" rel="enclosure">
    </link>
  </entry>
</feed>"""

    # The data does not change, so save the ETag and modified times
    # as class attributes.
    ETAG = make_etag(FEED_DATA)
    # Calculated using email.utils.formatdate(usegmt=True)
    MODIFIED_TIME = 'Sun, 08 Apr 2012 20:16:48 GMT'

    def do_GET(self):
        "Handle GET requests."
        logger.debug('GET %s', self.path)

        if self.path == '/shutdown':
            # Shortcut to handle stopping the server
            logger.debug('Stopping server')
            self.server.stop()
            self.send_response(200)

        else:
            # Record the request for tests that count them
            self.server.requests.append(self.path)
            # Process the request
            logger.debug('pre-defined response code: %d', self.server.response)
            handler_method_name = 'do_GET_%d' % self.server.response
            handler_method = getattr(self, handler_method_name)
            handler_method()
        return

    def do_GET_3xx(self):
        "Handle redirects"
        if self.path.endswith('/redirected'):
            logger.debug('already redirected')
            # We have already redirected, so return the data.
            return self.do_GET_200()
        new_path = self.server.new_path
        logger.debug('redirecting to %s', new_path)
        self.send_response(self.server.response)
        self.send_header('Location', new_path)
        return

    do_GET_301 = do_GET_3xx
    do_GET_302 = do_GET_3xx
    do_GET_303 = do_GET_3xx
    do_GET_307 = do_GET_3xx

    def do_GET_200(self):
        logger.debug('Etag: %s' % self.ETAG)
        logger.debug('Last-Modified: %s' % self.MODIFIED_TIME)

        incoming_etag = self.headers.get('If-None-Match', None)
        logger.debug('Incoming ETag: "%s"' % incoming_etag)

        incoming_modified = self.headers.get('If-Modified-Since', None)
        logger.debug('Incoming If-Modified-Since: %s' % incoming_modified)

        send_data = True

        # Does the client have the same version of the data we have?
        if self.server.apply_modified_headers:
            if incoming_etag == self.ETAG:
                logger.debug('Response 304, etag')
                self.send_response(304)
                send_data = False

            elif incoming_modified == self.MODIFIED_TIME:
                logger.debug('Response 304, modified time')
                self.send_response(304)
                send_data = False

        # Now optionally send the data, if the client needs it
        if send_data:
            logger.debug('Response 200')
            self.send_response(200)

            self.send_header('Content-Type', 'application/atom+xml')

            logger.debug('Outgoing Etag: %s' % self.ETAG)
            self.send_header('ETag', self.ETAG)

            logger.debug('Outgoing modified time: %s' % self.MODIFIED_TIME)
            self.send_header('Last-Modified', self.MODIFIED_TIME)

            self.end_headers()

            logger.debug('Sending data')
            self.wfile.write(self.FEED_DATA)
        return


class TestHTTPServer(BaseHTTPServer.HTTPServer):
    """HTTP Server which counts the number of requests made
    and can stop based on client instructions.
    """

    def __init__(self, applyModifiedHeaders=True, handler=TestHTTPHandler):
        self.apply_modified_headers = applyModifiedHeaders
        self.keep_serving = True
        self.requests = []
        self.setResponse(200)
        BaseHTTPServer.HTTPServer.__init__(self, ('', 9999), handler)
        return

    def setResponse(self, newResponse, newPath=None):
        """Sets the response code to use for future requests, and a new
        path to be used as a redirect target, if necessary.
        """
        self.response = newResponse
        self.new_path = newPath
        return

    def getNumRequests(self):
        "Return the number of requests which have been made on the server."
        return len(self.requests)

    def stop(self):
        "Stop serving requests, after the next request."
        self.keep_serving = False
        return

    def serve_forever(self):
        "Main loop for server"
        while self.keep_serving:
            self.handle_request()
        logger.debug('exiting')
        return


class HTTPTestBase(unittest.TestCase):
    "Base class for tests that use a TestHTTPServer"

    TEST_URL = 'http://localhost:9999/'

    CACHE_TTL = 0

    def setUp(self):
        self.server = self.getServer()
        self.server_thread = threading.Thread(target=self.server.serve_forever)
        # set daemon flag so the tests don't hang if cleanup fails
        self.server_thread.setDaemon(True)
        self.server_thread.start()
        return

    def getServer(self):
        "Return a web server for the test."
        s = TestHTTPServer()
        s.setResponse(200)
        return s

    def tearDown(self):
        # Stop the server thread
        urllib.urlretrieve(self.TEST_URL + 'shutdown')
        time.sleep(1)
        self.server.server_close()
        self.server_thread.join()
        return
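The removed server above computes its ETag with the Python-2-only `md5` module. The equivalent computation under Python 3 uses `hashlib`; this is a sketch, not part of the diff:

```python
import hashlib

def make_etag(data):
    """Compute an ETag value for response data, mirroring the
    test server's make_etag() with hashlib instead of the
    removed Python 2 md5 module."""
    if isinstance(data, str):
        data = data.encode('utf-8')  # hashlib requires bytes
    return hashlib.md5(data).hexdigest()
```

Because the feed body never changes, hashing it once at class-definition time (as the server does) gives a stable validator for every response.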
@ -1,89 +0,0 @@
#!/usr/bin/env python
#
# Copyright 2007 Doug Hellmann.
#
#
# All Rights Reserved
#
# Permission to use, copy, modify, and distribute this software and
# its documentation for any purpose and without fee is hereby
# granted, provided that the above copyright notice appear in all
# copies and that both that copyright notice and this permission
# notice appear in supporting documentation, and that the name of Doug
# Hellmann not be used in advertising or publicity pertaining to
# distribution of the software without specific, written prior
# permission.
#
# DOUG HELLMANN DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
# INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
# NO EVENT SHALL DOUG HELLMANN BE LIABLE FOR ANY SPECIAL, INDIRECT OR
# CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
# OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
# NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
# CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
#

"""Tests with shove filesystem storage.
"""

__module_id__ = "$Id$"

#
# Import system modules
#
import os
import shove
import tempfile
import threading
import unittest

#
# Import local modules
#
from cache import Cache
from test_server import HTTPTestBase

#
# Module
#

class CacheShoveTest(HTTPTestBase):

    def setUp(self):
        HTTPTestBase.setUp(self)
        self.shove_dirname = tempfile.mkdtemp('shove')
        return

    def tearDown(self):
        try:
            os.system('rm -rf %s' % self.shove_dirname)
        except AttributeError:
            pass
        HTTPTestBase.tearDown(self)
        return

    def test(self):
        # First fetch the data through the cache
        storage = shove.Shove('file://' + self.shove_dirname)
        try:
            fc = Cache(storage)
            parsed_data = fc.fetch(self.TEST_URL)
            self.failUnlessEqual(parsed_data.feed.title, 'CacheTest test data')
        finally:
            storage.close()

        # Now retrieve the same data directly from the shelf
        storage = shove.Shove('file://' + self.shove_dirname)
        try:
            modified, shelved_data = storage[self.TEST_URL]
        finally:
            storage.close()

        # The data should be the same
        self.failUnlessEqual(parsed_data, shelved_data)
        return


if __name__ == '__main__':
    unittest.main()
@ -1,30 +0,0 @@
Metadata-Version: 1.1
Name: feedparser
Version: 5.1.3
Summary: Universal feed parser, handles RSS 0.9x, RSS 1.0, RSS 2.0, CDF, Atom 0.3, and Atom 1.0 feeds
Home-page: http://code.google.com/p/feedparser/
Author: Kurt McKee
Author-email: contactme@kurtmckee.org
License: UNKNOWN
Download-URL: http://code.google.com/p/feedparser/
Description: UNKNOWN
Keywords: atom,cdf,feed,parser,rdf,rss
Platform: POSIX
Platform: Windows
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.4
Classifier: Programming Language :: Python :: 2.5
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.0
Classifier: Programming Language :: Python :: 3.1
Classifier: Programming Language :: Python :: 3.2
Classifier: Programming Language :: Python :: 3.3
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Text Processing :: Markup :: XML
@ -1 +0,0 @@
@ -1 +0,0 @@
feedparser
@ -1,859 +0,0 @@
|
|||
#!/usr/bin/env python
|
||||
|
||||
__author__ = "Mark Pilgrim <http://diveintomark.org/>"
|
||||
__license__ = """
|
||||
Copyright (c) 2010-2012 Kurt McKee <contactme@kurtmckee.org>
|
||||
Copyright (c) 2004-2008 Mark Pilgrim
|
||||
All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without modification,
|
||||
are permitted provided that the following conditions are met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright notice,
|
||||
this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 'AS IS'
|
||||
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
|
||||
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
|
||||
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
|
||||
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
|
||||
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
|
||||
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
|
||||
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
|
||||
POSSIBILITY OF SUCH DAMAGE."""
|
||||
|
||||
import codecs
|
||||
import datetime
|
||||
import glob
|
||||
import operator
|
||||
import os
|
||||
import posixpath
|
||||
import pprint
|
||||
import re
|
||||
import struct
|
||||
import sys
|
||||
import threading
|
||||
import time
|
||||
import unittest
|
||||
import urllib
|
||||
import warnings
|
||||
import zlib
|
||||
import BaseHTTPServer
|
||||
import SimpleHTTPServer
|
||||
|
||||
import feedparser
|
||||
|
||||
if not feedparser._XML_AVAILABLE:
|
||||
sys.stderr.write('No XML parsers available, unit testing can not proceed\n')
|
||||
sys.exit(1)
|
||||
|
||||
try:
|
||||
# the utf_32 codec was introduced in Python 2.6; it's necessary to
|
||||
# check this as long as feedparser supports Python 2.4 and 2.5
|
||||
codecs.lookup('utf_32')
|
||||
except LookupError:
|
||||
_UTF32_AVAILABLE = False
|
||||
else:
|
||||
_UTF32_AVAILABLE = True
|
||||
|
||||
_s2bytes = feedparser._s2bytes
|
||||
_l2bytes = feedparser._l2bytes
|
||||
|
||||
#---------- custom HTTP server (used to serve test feeds) ----------
|
||||
|
||||
_PORT = 8097 # not really configurable, must match hardcoded port in tests
|
||||
_HOST = '127.0.0.1' # also not really configurable
|
||||
|
||||
class FeedParserTestRequestHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    headers_re = re.compile(_s2bytes(r"^Header:\s+([^:]+):(.+)$"), re.MULTILINE)

    def send_head(self):
        """Send custom headers defined in test case

        Example:
        <!--
        Header: Content-type: application/atom+xml
        Header: X-Foo: bar
        -->
        """
        # Short-circuit the HTTP status test `test_redirect_to_304()`
        if self.path == '/-/return-304.xml':
            self.send_response(304)
            self.send_header('Content-type', 'text/xml')
            self.end_headers()
            return feedparser._StringIO(u''.encode('utf-8'))
        path = self.translate_path(self.path)
        # the compression tests' filenames determine the header sent
        if self.path.startswith('/tests/compression'):
            if self.path.endswith('gz'):
                headers = {'Content-Encoding': 'gzip'}
            else:
                headers = {'Content-Encoding': 'deflate'}
            headers['Content-type'] = 'application/xml'
        else:
            headers = dict([(k.decode('utf-8'), v.decode('utf-8').strip()) for k, v in self.headers_re.findall(open(path, 'rb').read())])
        f = open(path, 'rb')
        if (self.headers.get('if-modified-since') == headers.get('Last-Modified', 'nom')) \
            or (self.headers.get('if-none-match') == headers.get('ETag', 'nomatch')):
            status = 304
        else:
            status = 200
        headers.setdefault('Status', status)
        self.send_response(int(headers['Status']))
        headers.setdefault('Content-type', self.guess_type(path))
        self.send_header("Content-type", headers['Content-type'])
        self.send_header("Content-Length", str(os.stat(f.name)[6]))
        for k, v in headers.items():
            if k not in ('Status', 'Content-type'):
                self.send_header(k, v)
        self.end_headers()
        return f

    def log_request(self, *args):
        pass

class FeedParserTestServer(threading.Thread):
    """HTTP Server that runs in a thread and handles a predetermined number of requests"""

    def __init__(self, requests):
        threading.Thread.__init__(self)
        self.requests = requests
        self.ready = threading.Event()

    def run(self):
        self.httpd = BaseHTTPServer.HTTPServer((_HOST, _PORT), FeedParserTestRequestHandler)
        self.ready.set()
        while self.requests:
            self.httpd.handle_request()
            self.requests -= 1
        self.ready.clear()

#---------- dummy test case class (test methods are added dynamically) ----------

unicode1_re = re.compile(_s2bytes(" u'"))
unicode2_re = re.compile(_s2bytes(' u"'))

# _bytes is only used in everythingIsUnicode().
# In Python 2 it's str, and in Python 3 it's bytes.
_bytes = type(_s2bytes(''))

def everythingIsUnicode(d):
    """Takes a dictionary, recursively verifies that every value is unicode"""
    for k, v in d.iteritems():
        if isinstance(v, dict) and k != 'headers':
            if not everythingIsUnicode(v):
                return False
        elif isinstance(v, list):
            for i in v:
                if isinstance(i, dict) and not everythingIsUnicode(i):
                    return False
                elif isinstance(i, _bytes):
                    return False
        elif isinstance(v, _bytes):
            return False
    return True

def failUnlessEval(self, xmlfile, evalString, msg=None):
    """Fail unless eval(evalString, env)"""
    env = feedparser.parse(xmlfile)
    try:
        if not eval(evalString, globals(), env):
            failure=(msg or 'not eval(%s) \nWITH env(%s)' % (evalString, pprint.pformat(env)))
            raise self.failureException, failure
        if not everythingIsUnicode(env):
            raise self.failureException, "not everything is unicode \nWITH env(%s)" % (pprint.pformat(env), )
    except SyntaxError:
        # Python 3 doesn't have the `u""` syntax, so evalString needs to be modified,
        # which will require the failure message to be updated
        evalString = re.sub(unicode1_re, _s2bytes(" '"), evalString)
        evalString = re.sub(unicode2_re, _s2bytes(' "'), evalString)
        if not eval(evalString, globals(), env):
            failure=(msg or 'not eval(%s) \nWITH env(%s)' % (evalString, pprint.pformat(env)))
            raise self.failureException, failure

class BaseTestCase(unittest.TestCase):
    failUnlessEval = failUnlessEval

class TestCase(BaseTestCase):
    pass

class TestTemporaryFallbackBehavior(unittest.TestCase):
    "These tests are temporarily here because of issues 310 and 328"
    def test_issue_328_fallback_behavior(self):
        warnings.filterwarnings('error')

        d = feedparser.FeedParserDict()
        d['published'] = u'pub string'
        d['published_parsed'] = u'pub tuple'
        d['updated'] = u'upd string'
        d['updated_parsed'] = u'upd tuple'
        # Ensure that `updated` doesn't map to `published` when it exists
        self.assertTrue('published' in d)
        self.assertTrue('published_parsed' in d)
        self.assertTrue('updated' in d)
        self.assertTrue('updated_parsed' in d)
        self.assertEqual(d['published'], 'pub string')
        self.assertEqual(d['published_parsed'], 'pub tuple')
        self.assertEqual(d['updated'], 'upd string')
        self.assertEqual(d['updated_parsed'], 'upd tuple')

        d = feedparser.FeedParserDict()
        d['published'] = u'pub string'
        d['published_parsed'] = u'pub tuple'
        # Ensure that `updated` doesn't actually exist
        self.assertTrue('updated' not in d)
        self.assertTrue('updated_parsed' not in d)
        # Ensure that accessing `updated` throws a DeprecationWarning
        try:
            d['updated']
        except DeprecationWarning:
            # Expected behavior
            pass
        else:
            # Wrong behavior
            self.assertEqual(True, False)
        try:
            d['updated_parsed']
        except DeprecationWarning:
            # Expected behavior
            pass
        else:
            # Wrong behavior
            self.assertEqual(True, False)
        # Ensure that `updated` maps to `published`
        warnings.filterwarnings('ignore')
        self.assertEqual(d['updated'], u'pub string')
        self.assertEqual(d['updated_parsed'], u'pub tuple')
        warnings.resetwarnings()


class TestEverythingIsUnicode(unittest.TestCase):
    "Ensure that `everythingIsUnicode()` is working appropriately"
    def test_everything_is_unicode(self):
        self.assertTrue(everythingIsUnicode(
            {'a': u'a', 'b': [u'b', {'c': u'c'}], 'd': {'e': u'e'}}
        ))
    def test_not_everything_is_unicode(self):
        self.assertFalse(everythingIsUnicode({'a': _s2bytes('a')}))
        self.assertFalse(everythingIsUnicode({'a': [_s2bytes('a')]}))
        self.assertFalse(everythingIsUnicode({'a': {'b': _s2bytes('b')}}))
        self.assertFalse(everythingIsUnicode({'a': [{'b': _s2bytes('b')}]}))

class TestLooseParser(BaseTestCase):
    "Test the sgmllib-based parser by manipulating feedparser " \
    "into believing no XML parsers are installed"
    def __init__(self, arg):
        unittest.TestCase.__init__(self, arg)
        self._xml_available = feedparser._XML_AVAILABLE
    def setUp(self):
        feedparser._XML_AVAILABLE = 0
    def tearDown(self):
        feedparser._XML_AVAILABLE = self._xml_available

class TestStrictParser(BaseTestCase):
    pass

class TestMicroformats(BaseTestCase):
    pass

class TestEncodings(BaseTestCase):
    def test_doctype_replacement(self):
        "Ensure that non-ASCII-compatible encodings don't hide " \
        "disallowed ENTITY declarations"
        doc = """<?xml version="1.0" encoding="utf-16be"?>
<!DOCTYPE feed [
<!ENTITY exponential1 "bogus ">
<!ENTITY exponential2 "&exponential1;&exponential1;">
<!ENTITY exponential3 "&exponential2;&exponential2;">
]>
<feed><title type="html">&exponential3;</title></feed>"""
        doc = codecs.BOM_UTF16_BE + doc.encode('utf-16be')
        result = feedparser.parse(doc)
        self.assertEqual(result['feed']['title'], u'&amp;exponential3')
    def test_gb2312_converted_to_gb18030_in_xml_encoding(self):
        # \u55de was chosen because it exists in gb18030 but not gb2312
        feed = u'''<?xml version="1.0" encoding="gb2312"?>
<feed><title>\u55de</title></feed>'''
        result = feedparser.parse(feed.encode('gb18030'), response_headers={
            'Content-Type': 'text/xml'
        })
        self.assertEqual(result.encoding, 'gb18030')

class TestFeedParserDict(unittest.TestCase):
    "Ensure that FeedParserDict returns values as expected and won't crash"
    def setUp(self):
        self.d = feedparser.FeedParserDict()
    def _check_key(self, k):
        self.assertTrue(k in self.d)
        self.assertTrue(hasattr(self.d, k))
        self.assertEqual(self.d[k], 1)
        self.assertEqual(getattr(self.d, k), 1)
    def _check_no_key(self, k):
        self.assertTrue(k not in self.d)
        self.assertTrue(not hasattr(self.d, k))
    def test_empty(self):
        keys = (
            'a','entries', 'id', 'guid', 'summary', 'subtitle', 'description',
            'category', 'enclosures', 'license', 'categories',
        )
        for k in keys:
            self._check_no_key(k)
        self.assertTrue('items' not in self.d)
        self.assertTrue(hasattr(self.d, 'items')) # dict.items() exists
    def test_neutral(self):
        self.d['a'] = 1
        self._check_key('a')
    def test_single_mapping_target_1(self):
        self.d['id'] = 1
        self._check_key('id')
        self._check_key('guid')
    def test_single_mapping_target_2(self):
        self.d['guid'] = 1
        self._check_key('id')
        self._check_key('guid')
    def test_multiple_mapping_target_1(self):
        self.d['summary'] = 1
        self._check_key('summary')
        self._check_key('description')
    def test_multiple_mapping_target_2(self):
        self.d['subtitle'] = 1
        self._check_key('subtitle')
        self._check_key('description')
    def test_multiple_mapping_mapped_key(self):
        self.d['description'] = 1
        self._check_key('summary')
        self._check_key('description')
    def test_license(self):
        self.d['links'] = []
        try:
            self.d['license']
            self.assertTrue(False)
        except KeyError:
            pass
        self.d['links'].append({'rel': 'license'})
        try:
            self.d['license']
            self.assertTrue(False)
        except KeyError:
            pass
        self.d['links'].append({'rel': 'license', 'href': 'http://dom.test/'})
        self.assertEqual(self.d['license'], 'http://dom.test/')
    def test_category(self):
        self.d['tags'] = []
        try:
            self.d['category']
            self.assertTrue(False)
        except KeyError:
            pass
        self.d['tags'] = [{}]
        try:
            self.d['category']
            self.assertTrue(False)
        except KeyError:
            pass
        self.d['tags'] = [{'term': 'cat'}]
        self.assertEqual(self.d['category'], 'cat')
        self.d['tags'].append({'term': 'dog'})
        self.assertEqual(self.d['category'], 'cat')

class TestOpenResource(unittest.TestCase):
    "Ensure that `_open_resource()` interprets its arguments as URIs, " \
    "file-like objects, or in-memory feeds as expected"
    def test_fileobj(self):
        r = feedparser._open_resource(sys.stdin, '', '', '', '', [], {})
        self.assertTrue(r is sys.stdin)
    def test_feed(self):
        f = feedparser.parse(u'feed://localhost:8097/tests/http/target.xml')
        self.assertEqual(f.href, u'http://localhost:8097/tests/http/target.xml')
    def test_feed_http(self):
        f = feedparser.parse(u'feed:http://localhost:8097/tests/http/target.xml')
        self.assertEqual(f.href, u'http://localhost:8097/tests/http/target.xml')
    def test_bytes(self):
        s = '<feed><item><title>text</title></item></feed>'.encode('utf-8')
        r = feedparser._open_resource(s, '', '', '', '', [], {})
        self.assertEqual(s, r.read())
    def test_string(self):
        s = '<feed><item><title>text</title></item></feed>'
        r = feedparser._open_resource(s, '', '', '', '', [], {})
        self.assertEqual(s.encode('utf-8'), r.read())
    def test_unicode_1(self):
        s = u'<feed><item><title>text</title></item></feed>'
        r = feedparser._open_resource(s, '', '', '', '', [], {})
        self.assertEqual(s.encode('utf-8'), r.read())
    def test_unicode_2(self):
        s = u'<feed><item><title>t\u00e9xt</title></item></feed>'
        r = feedparser._open_resource(s, '', '', '', '', [], {})
        self.assertEqual(s.encode('utf-8'), r.read())

class TestMakeSafeAbsoluteURI(unittest.TestCase):
    "Exercise the URI joining and sanitization code"
    base = u'http://d.test/d/f.ext'
    def _mktest(rel, expect, doc):
        def fn(self):
            value = feedparser._makeSafeAbsoluteURI(self.base, rel)
            self.assertEqual(value, expect)
        fn.__doc__ = doc
        return fn

    # make the test cases; the call signature is:
    # (relative_url, expected_return_value, test_doc_string)
    test_abs = _mktest(u'https://s.test/', u'https://s.test/', 'absolute uri')
    test_rel = _mktest(u'/new', u'http://d.test/new', 'relative uri')
    test_bad = _mktest(u'x://bad.test/', u'', 'unacceptable uri protocol')
    test_mag = _mktest(u'magnet:?xt=a', u'magnet:?xt=a', 'magnet uri')

    def test_catch_ValueError(self):
        'catch ValueError in Python 2.7 and up'
        uri = u'http://bad]test/'
        value1 = feedparser._makeSafeAbsoluteURI(uri)
        value2 = feedparser._makeSafeAbsoluteURI(self.base, uri)
        swap = feedparser.ACCEPTABLE_URI_SCHEMES
        feedparser.ACCEPTABLE_URI_SCHEMES = ()
        value3 = feedparser._makeSafeAbsoluteURI(self.base, uri)
        feedparser.ACCEPTABLE_URI_SCHEMES = swap
        # Only Python 2.7 and up throw a ValueError, otherwise uri is returned
        self.assertTrue(value1 in (uri, u''))
        self.assertTrue(value2 in (uri, u''))
        self.assertTrue(value3 in (uri, u''))

class TestConvertToIdn(unittest.TestCase):
    "Test IDN support (unavailable in Jython as of Jython 2.5.2)"
    # this is the greek test domain
    hostname = u'\u03c0\u03b1\u03c1\u03ac\u03b4\u03b5\u03b9\u03b3\u03bc\u03b1'
    hostname += u'.\u03b4\u03bf\u03ba\u03b9\u03bc\u03ae'
    def test_control(self):
        r = feedparser._convert_to_idn(u'http://example.test/')
        self.assertEqual(r, u'http://example.test/')
    def test_idn(self):
        r = feedparser._convert_to_idn(u'http://%s/' % (self.hostname,))
        self.assertEqual(r, u'http://xn--hxajbheg2az3al.xn--jxalpdlp/')
    def test_port(self):
        r = feedparser._convert_to_idn(u'http://%s:8080/' % (self.hostname,))
        self.assertEqual(r, u'http://xn--hxajbheg2az3al.xn--jxalpdlp:8080/')

class TestCompression(unittest.TestCase):
    "Test the gzip and deflate support in the HTTP code"
    def test_gzip_good(self):
        f = feedparser.parse('http://localhost:8097/tests/compression/gzip.gz')
        self.assertEqual(f.version, 'atom10')
    def test_gzip_not_compressed(self):
        f = feedparser.parse('http://localhost:8097/tests/compression/gzip-not-compressed.gz')
        self.assertEqual(f.bozo, 1)
        self.assertTrue(isinstance(f.bozo_exception, IOError))
        self.assertEqual(f['feed']['title'], 'gzip')
    def test_gzip_struct_error(self):
        f = feedparser.parse('http://localhost:8097/tests/compression/gzip-struct-error.gz')
        self.assertEqual(f.bozo, 1)
        self.assertTrue(isinstance(f.bozo_exception, struct.error))
    def test_zlib_good(self):
        f = feedparser.parse('http://localhost:8097/tests/compression/deflate.z')
        self.assertEqual(f.version, 'atom10')
    def test_zlib_no_headers(self):
        f = feedparser.parse('http://localhost:8097/tests/compression/deflate-no-headers.z')
        self.assertEqual(f.version, 'atom10')
    def test_zlib_not_compressed(self):
        f = feedparser.parse('http://localhost:8097/tests/compression/deflate-not-compressed.z')
        self.assertEqual(f.bozo, 1)
        self.assertTrue(isinstance(f.bozo_exception, zlib.error))
        self.assertEqual(f['feed']['title'], 'deflate')

class TestHTTPStatus(unittest.TestCase):
    "Test HTTP redirection and other status codes"
    def test_301(self):
        f = feedparser.parse('http://localhost:8097/tests/http/http_status_301.xml')
        self.assertEqual(f.status, 301)
        self.assertEqual(f.href, 'http://localhost:8097/tests/http/target.xml')
        self.assertEqual(f.entries[0].title, 'target')
    def test_302(self):
        f = feedparser.parse('http://localhost:8097/tests/http/http_status_302.xml')
        self.assertEqual(f.status, 302)
        self.assertEqual(f.href, 'http://localhost:8097/tests/http/target.xml')
        self.assertEqual(f.entries[0].title, 'target')
    def test_303(self):
        f = feedparser.parse('http://localhost:8097/tests/http/http_status_303.xml')
        self.assertEqual(f.status, 303)
        self.assertEqual(f.href, 'http://localhost:8097/tests/http/target.xml')
        self.assertEqual(f.entries[0].title, 'target')
    def test_307(self):
        f = feedparser.parse('http://localhost:8097/tests/http/http_status_307.xml')
        self.assertEqual(f.status, 307)
        self.assertEqual(f.href, 'http://localhost:8097/tests/http/target.xml')
        self.assertEqual(f.entries[0].title, 'target')
    def test_304(self):
        # first retrieve the url
        u = 'http://localhost:8097/tests/http/http_status_304.xml'
        f = feedparser.parse(u)
        self.assertEqual(f.status, 200)
        self.assertEqual(f.entries[0].title, 'title 304')
        # extract the etag and last-modified headers
        e = [v for k, v in f.headers.items() if k.lower() == 'etag'][0]
        mh = [v for k, v in f.headers.items() if k.lower() == 'last-modified'][0]
        ms = f.updated
        mt = f.updated_parsed
        md = datetime.datetime(*mt[0:7])
        self.assertTrue(isinstance(mh, basestring))
        self.assertTrue(isinstance(ms, basestring))
        self.assertTrue(isinstance(mt, time.struct_time))
        self.assertTrue(isinstance(md, datetime.datetime))
        # test that sending back the etag results in a 304
        f = feedparser.parse(u, etag=e)
        self.assertEqual(f.status, 304)
        # test that sending back last-modified (string) results in a 304
        f = feedparser.parse(u, modified=ms)
        self.assertEqual(f.status, 304)
        # test that sending back last-modified (9-tuple) results in a 304
        f = feedparser.parse(u, modified=mt)
        self.assertEqual(f.status, 304)
        # test that sending back last-modified (datetime) results in a 304
        f = feedparser.parse(u, modified=md)
        self.assertEqual(f.status, 304)
    def test_404(self):
        f = feedparser.parse('http://localhost:8097/tests/http/http_status_404.xml')
        self.assertEqual(f.status, 404)
    def test_9001(self):
        f = feedparser.parse('http://localhost:8097/tests/http/http_status_9001.xml')
        self.assertEqual(f.bozo, 1)
    def test_redirect_to_304(self):
        # ensure that an http redirect to an http 304 doesn't
        # trigger a bozo_exception
        u = 'http://localhost:8097/tests/http/http_redirect_to_304.xml'
        f = feedparser.parse(u)
        self.assertTrue(f.bozo == 0)
        self.assertTrue(f.status == 302)

class TestDateParsers(unittest.TestCase):
    "Test the various date parsers; most of the test cases are constructed " \
    "dynamically based on the contents of the `date_tests` dict, below"
    def test_None(self):
        self.assertTrue(feedparser._parse_date(None) is None)
    def _check_date(self, func, dtstring, dttuple):
        try:
            tup = func(dtstring)
        except (OverflowError, ValueError):
            tup = None
        self.assertEqual(tup, dttuple)
        self.assertEqual(tup, feedparser._parse_date(dtstring))
    def test_year_10000_date(self):
        # On some systems this date string will trigger an OverflowError.
        # On Jython and x64 systems, however, it's interpreted just fine.
        try:
            date = feedparser._parse_date_rfc822(u'Sun, 31 Dec 9999 23:59:59 -9999')
        except OverflowError:
            date = None
        self.assertTrue(date in (None, (10000, 1, 5, 4, 38, 59, 2, 5, 0)))

date_tests = {
    feedparser._parse_date_greek: (
        (u'', None), # empty string
        (u'\u039a\u03c5\u03c1, 11 \u0399\u03bf\u03cd\u03bb 2004 12:00:00 EST', (2004, 7, 11, 17, 0, 0, 6, 193, 0)),
    ),
    feedparser._parse_date_hungarian: (
        (u'', None), # empty string
        (u'2004-j\u00falius-13T9:15-05:00', (2004, 7, 13, 14, 15, 0, 1, 195, 0)),
    ),
    feedparser._parse_date_iso8601: (
        (u'', None), # empty string
        (u'-0312', (2003, 12, 1, 0, 0, 0, 0, 335, 0)), # 2-digit year/month only variant
        (u'031231', (2003, 12, 31, 0, 0, 0, 2, 365, 0)), # 2-digit year/month/day only, no hyphens
        (u'03-12-31', (2003, 12, 31, 0, 0, 0, 2, 365, 0)), # 2-digit year/month/day only
        (u'-03-12', (2003, 12, 1, 0, 0, 0, 0, 335, 0)), # 2-digit year/month only
        (u'03335', (2003, 12, 1, 0, 0, 0, 0, 335, 0)), # 2-digit year/ordinal, no hyphens
        (u'2003-12-31T10:14:55.1234Z', (2003, 12, 31, 10, 14, 55, 2, 365, 0)), # fractional seconds
        # Special case for Google's extra zero in the month
        (u'2003-012-31T10:14:55+00:00', (2003, 12, 31, 10, 14, 55, 2, 365, 0)),
    ),
    feedparser._parse_date_nate: (
        (u'', None), # empty string
        (u'2004-05-25 \uc624\ud6c4 11:23:17', (2004, 5, 25, 14, 23, 17, 1, 146, 0)),
    ),
    feedparser._parse_date_onblog: (
        (u'', None), # empty string
        (u'2004\ub144 05\uc6d4 28\uc77c 01:31:15', (2004, 5, 27, 16, 31, 15, 3, 148, 0)),
    ),
    feedparser._parse_date_perforce: (
        (u'', None), # empty string
        (u'Fri, 2006/09/15 08:19:53 EDT', (2006, 9, 15, 12, 19, 53, 4, 258, 0)),
    ),
    feedparser._parse_date_rfc822: (
        (u'', None), # empty string
        (u'Thu, 01 Jan 0100 00:00:01 +0100', (99, 12, 31, 23, 0, 1, 3, 365, 0)), # ancient date
        (u'Thu, 01 Jan 04 19:48:21 GMT', (2004, 1, 1, 19, 48, 21, 3, 1, 0)), # 2-digit year
        (u'Thu, 01 Jan 2004 19:48:21 GMT', (2004, 1, 1, 19, 48, 21, 3, 1, 0)), # 4-digit year
        (u'Thu, 5 Apr 2012 10:00:00 GMT', (2012, 4, 5, 10, 0, 0, 3, 96, 0)), # 1-digit day
        (u'Wed, 19 Aug 2009 18:28:00 Etc/GMT', (2009, 8, 19, 18, 28, 0, 2, 231, 0)), # etc/gmt timezone
        (u'Wed, 19 Feb 2012 22:40:00 GMT-01:01', (2012, 2, 19, 23, 41, 0, 6, 50, 0)), # gmt+hh:mm timezone
        (u'Mon, 13 Feb, 2012 06:28:00 UTC', (2012, 2, 13, 6, 28, 0, 0, 44, 0)), # extraneous comma
        (u'Thu, 01 Jan 2004 00:00 GMT', (2004, 1, 1, 0, 0, 0, 3, 1, 0)), # no seconds
        (u'Thu, 01 Jan 2004', (2004, 1, 1, 0, 0, 0, 3, 1, 0)), # no time
        # Additional tests to handle Disney's long month names and invalid timezones
        (u'Mon, 26 January 2004 16:31:00 AT', (2004, 1, 26, 20, 31, 0, 0, 26, 0)),
        (u'Mon, 26 January 2004 16:31:00 ET', (2004, 1, 26, 21, 31, 0, 0, 26, 0)),
        (u'Mon, 26 January 2004 16:31:00 CT', (2004, 1, 26, 22, 31, 0, 0, 26, 0)),
        (u'Mon, 26 January 2004 16:31:00 MT', (2004, 1, 26, 23, 31, 0, 0, 26, 0)),
        (u'Mon, 26 January 2004 16:31:00 PT', (2004, 1, 27, 0, 31, 0, 1, 27, 0)),
    ),
    feedparser._parse_date_rfc822_grubby: (
        (u'Thu Aug 30 2012 17:26:16 +0200', (2012, 8, 30, 15, 26, 16, 3, 243, 0)),
    ),
    feedparser._parse_date_asctime: (
        (u'Sun Jan 4 16:29:06 2004', (2004, 1, 4, 16, 29, 6, 6, 4, 0)),
    ),
    feedparser._parse_date_w3dtf: (
        (u'', None), # empty string
        (u'2003-12-31T10:14:55Z', (2003, 12, 31, 10, 14, 55, 2, 365, 0)), # UTC
        (u'2003-12-31T10:14:55-08:00', (2003, 12, 31, 18, 14, 55, 2, 365, 0)), # San Francisco timezone
        (u'2003-12-31T18:14:55+08:00', (2003, 12, 31, 10, 14, 55, 2, 365, 0)), # Tokyo timezone
        (u'2007-04-23T23:25:47.538+10:00', (2007, 4, 23, 13, 25, 47, 0, 113, 0)), # fractional seconds
        (u'2003-12-31', (2003, 12, 31, 0, 0, 0, 2, 365, 0)), # year/month/day only
        (u'20031231', (2003, 12, 31, 0, 0, 0, 2, 365, 0)), # year/month/day only, no hyphens
        (u'2003-12', (2003, 12, 1, 0, 0, 0, 0, 335, 0)), # year/month only
        (u'2003', (2003, 1, 1, 0, 0, 0, 2, 1, 0)), # year only
        # MSSQL-style dates
        (u'2004-07-08 23:56:58 -00:20', (2004, 7, 9, 0, 16, 58, 4, 191, 0)), # with timezone
        (u'2004-07-08 23:56:58', (2004, 7, 8, 23, 56, 58, 3, 190, 0)), # without timezone
        (u'2004-07-08 23:56:58.0', (2004, 7, 8, 23, 56, 58, 3, 190, 0)), # with fractional second
        # Special cases for out-of-range times
        (u'2003-12-31T25:14:55Z', (2004, 1, 1, 1, 14, 55, 3, 1, 0)), # invalid (25 hours)
        (u'2003-12-31T10:61:55Z', (2003, 12, 31, 11, 1, 55, 2, 365, 0)), # invalid (61 minutes)
        (u'2003-12-31T10:14:61Z', (2003, 12, 31, 10, 15, 1, 2, 365, 0)), # invalid (61 seconds)
        # Special cases for rollovers in leap years
        (u'2004-02-28T18:14:55-08:00', (2004, 2, 29, 2, 14, 55, 6, 60, 0)), # feb 28 in leap year
        (u'2003-02-28T18:14:55-08:00', (2003, 3, 1, 2, 14, 55, 5, 60, 0)), # feb 28 in non-leap year
        (u'2000-02-28T18:14:55-08:00', (2000, 2, 29, 2, 14, 55, 1, 60, 0)), # feb 28 in leap year on century divisible by 400
    )
}

def make_date_test(f, s, t):
    return lambda self: self._check_date(f, s, t)

for func, items in date_tests.iteritems():
    for i, (dtstring, dttuple) in enumerate(items):
        uniqfunc = make_date_test(func, dtstring, dttuple)
        setattr(TestDateParsers, 'test_%s_%02i' % (func.__name__, i), uniqfunc)


class TestHTMLGuessing(unittest.TestCase):
    "Exercise the HTML sniffing code"
    def _mktest(text, expect, doc):
        def fn(self):
            value = bool(feedparser._FeedParserMixin.lookslikehtml(text))
            self.assertEqual(value, expect)
        fn.__doc__ = doc
        return fn

    test_text_1 = _mktest(u'plain text', False, u'plain text')
    test_text_2 = _mktest(u'2 < 3', False, u'plain text with angle bracket')
    test_html_1 = _mktest(u'<a href="">a</a>', True, u'anchor tag')
    test_html_2 = _mktest(u'<i>i</i>', True, u'italics tag')
    test_html_3 = _mktest(u'<b>b</b>', True, u'bold tag')
    test_html_4 = _mktest(u'<code>', False, u'allowed tag, no end tag')
    test_html_5 = _mktest(u'<rss> .. </rss>', False, u'disallowed tag')
    test_entity_1 = _mktest(u'AT&T', False, u'corporation name')
    test_entity_2 = _mktest(u'&copy;', True, u'named entity reference')
    test_entity_3 = _mktest(u'&#169;', True, u'numeric entity reference')
    test_entity_4 = _mktest(u'&#xA9;', True, u'hex numeric entity reference')

#---------- additional api unit tests, not backed by files

class TestBuildRequest(unittest.TestCase):
    "Test that HTTP request objects are created as expected"
    def test_extra_headers(self):
        """You can pass in extra headers and they go into the request object."""

        request = feedparser._build_urllib2_request(
            'http://example.com/feed',
            'agent-name',
            None, None, None, None,
            {'Cache-Control': 'max-age=0'})
        # nb, urllib2 folds the case of the headers
        self.assertEqual(
            request.get_header('Cache-control'), 'max-age=0')


class TestLxmlBug(unittest.TestCase):
    def test_lxml_etree_bug(self):
        try:
            import lxml.etree
        except ImportError:
            pass
        else:
            doc = u"<feed>&illformed_charref</feed>".encode('utf8')
            # Importing lxml.etree currently causes libxml2 to
            # throw SAXException instead of SAXParseException.
            feedparser.parse(feedparser._StringIO(doc))
            self.assertTrue(True)

#---------- parse test files and create test methods ----------
def convert_to_utf8(data):
    "Identify data's encoding using its byte order mark " \
    "and convert it to its utf-8 equivalent"
    if data[:4] == _l2bytes([0x4c, 0x6f, 0xa7, 0x94]):
        return data.decode('cp037').encode('utf-8')
    elif data[:4] == _l2bytes([0x00, 0x00, 0xfe, 0xff]):
        if not _UTF32_AVAILABLE:
            return None
        return data.decode('utf-32be').encode('utf-8')
    elif data[:4] == _l2bytes([0xff, 0xfe, 0x00, 0x00]):
        if not _UTF32_AVAILABLE:
            return None
        return data.decode('utf-32le').encode('utf-8')
    elif data[:4] == _l2bytes([0x00, 0x00, 0x00, 0x3c]):
        if not _UTF32_AVAILABLE:
            return None
        return data.decode('utf-32be').encode('utf-8')
    elif data[:4] == _l2bytes([0x3c, 0x00, 0x00, 0x00]):
        if not _UTF32_AVAILABLE:
            return None
        return data.decode('utf-32le').encode('utf-8')
    elif data[:4] == _l2bytes([0x00, 0x3c, 0x00, 0x3f]):
        return data.decode('utf-16be').encode('utf-8')
    elif data[:4] == _l2bytes([0x3c, 0x00, 0x3f, 0x00]):
        return data.decode('utf-16le').encode('utf-8')
    elif (data[:2] == _l2bytes([0xfe, 0xff])) and (data[2:4] != _l2bytes([0x00, 0x00])):
        return data[2:].decode('utf-16be').encode('utf-8')
    elif (data[:2] == _l2bytes([0xff, 0xfe])) and (data[2:4] != _l2bytes([0x00, 0x00])):
        return data[2:].decode('utf-16le').encode('utf-8')
    elif data[:3] == _l2bytes([0xef, 0xbb, 0xbf]):
        return data[3:]
    # no byte order mark was found
    return data

skip_re = re.compile(_s2bytes("SkipUnless:\s*(.*?)\n"))
desc_re = re.compile(_s2bytes("Description:\s*(.*?)\s*Expect:\s*(.*)\s*-->"))
def getDescription(xmlfile, data):
    """Extract test data

    Each test case is an XML file which contains not only a test feed
    but also the description of the test and the condition that we
    would expect the parser to create when it parses the feed. Example:
    <!--
    Description: feed title
    Expect:      feed['title'] == u'Example feed'
    -->
    """
    skip_results = skip_re.search(data)
    if skip_results:
        skipUnless = skip_results.group(1).strip()
    else:
        skipUnless = '1'
    search_results = desc_re.search(data)
    if not search_results:
        raise RuntimeError, "can't parse %s" % xmlfile
    description, evalString = map(lambda s: s.strip(), list(search_results.groups()))
    description = xmlfile + ": " + unicode(description, 'utf8')
    return description, evalString, skipUnless

def buildTestCase(xmlfile, description, evalString):
    func = lambda self, xmlfile=xmlfile, evalString=evalString: \
        self.failUnlessEval(xmlfile, evalString)
    func.__doc__ = description
    return func

def runtests():
    "Read the files in the tests/ directory, dynamically add tests to the " \
    "TestCases above, spawn the HTTP server, and run the test suite"
    if sys.argv[1:]:
        allfiles = filter(lambda s: s.endswith('.xml'), reduce(operator.add, map(glob.glob, sys.argv[1:]), []))
        sys.argv = [sys.argv[0]] #+ sys.argv[2:]
    else:
        allfiles = glob.glob(os.path.join('.', 'tests', '**', '**', '*.xml'))
    wellformedfiles = glob.glob(os.path.join('.', 'tests', 'wellformed', '**', '*.xml'))
    illformedfiles = glob.glob(os.path.join('.', 'tests', 'illformed', '*.xml'))
    encodingfiles = glob.glob(os.path.join('.', 'tests', 'encoding', '*.xml'))
    entitiesfiles = glob.glob(os.path.join('.', 'tests', 'entities', '*.xml'))
    microformatfiles = glob.glob(os.path.join('.', 'tests', 'microformats', '**', '*.xml'))
    httpd = None
    # there are several compression test cases that must be accounted for
    # as well as a number of http status tests that redirect to a target
    # and a few `_open_resource`-related tests
    httpcount = 6 + 17 + 2
    httpcount += len([f for f in allfiles if 'http' in f])
    httpcount += len([f for f in wellformedfiles if 'http' in f])
    httpcount += len([f for f in illformedfiles if 'http' in f])
    httpcount += len([f for f in encodingfiles if 'http' in f])
    try:
        for c, xmlfile in enumerate(allfiles + encodingfiles + illformedfiles + entitiesfiles):
            addTo = TestCase
            if xmlfile in encodingfiles:
                addTo = TestEncodings
            elif xmlfile in entitiesfiles:
                addTo = (TestStrictParser, TestLooseParser)
            elif xmlfile in microformatfiles:
                addTo = TestMicroformats
            elif xmlfile in wellformedfiles:
                addTo = (TestStrictParser, TestLooseParser)
            data = open(xmlfile, 'rb').read()
            if 'encoding' in xmlfile:
                data = convert_to_utf8(data)
                if data is None:
                    # convert_to_utf8 found a byte order mark for utf_32
                    # but it's not supported in this installation of Python
                    if 'http' in xmlfile:
                        httpcount -= 1 + (xmlfile in wellformedfiles)
                    continue
            description, evalString, skipUnless = getDescription(xmlfile, data)
            testName = 'test_%06d' % c
            ishttp = 'http' in xmlfile
            try:
                if not eval(skipUnless): raise NotImplementedError
            except (ImportError, LookupError, NotImplementedError, AttributeError):
                if ishttp:
                    httpcount -= 1 + (xmlfile in wellformedfiles)
                continue
            if ishttp:
                xmlfile = 'http://%s:%s/%s' % (_HOST, _PORT, posixpath.normpath(xmlfile.replace('\\', '/')))
            testFunc = buildTestCase(xmlfile, description, evalString)
            if isinstance(addTo, tuple):
                setattr(addTo[0], testName, testFunc)
                setattr(addTo[1], testName, testFunc)
            else:
                setattr(addTo, testName, testFunc)
        if feedparser.TIDY_MARKUP and feedparser._mxtidy:
            sys.stderr.write('\nWarning: feedparser.TIDY_MARKUP invalidates tests, turning it off temporarily\n\n')
            feedparser.TIDY_MARKUP = 0
        if httpcount:
            httpd = FeedParserTestServer(httpcount)
            httpd.daemon = True
            httpd.start()
            httpd.ready.wait()
        testsuite = unittest.TestSuite()
        testloader = unittest.TestLoader()
        testsuite.addTest(testloader.loadTestsFromTestCase(TestCase))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestStrictParser))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestLooseParser))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestEncodings))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestDateParsers))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestHTMLGuessing))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestHTTPStatus))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestCompression))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestConvertToIdn))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestMicroformats))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestOpenResource))
        testsuite.addTest(testloader.loadTestsFromTestCase(TestFeedParserDict))
||||
testsuite.addTest(testloader.loadTestsFromTestCase(TestMakeSafeAbsoluteURI))
|
||||
testsuite.addTest(testloader.loadTestsFromTestCase(TestEverythingIsUnicode))
|
||||
testsuite.addTest(testloader.loadTestsFromTestCase(TestTemporaryFallbackBehavior))
|
||||
testsuite.addTest(testloader.loadTestsFromTestCase(TestLxmlBug))
|
||||
testresults = unittest.TextTestRunner(verbosity=1).run(testsuite)
|
||||
|
||||
# Return 0 if successful, 1 if there was a failure
|
||||
sys.exit(not testresults.wasSuccessful())
|
||||
finally:
|
||||
if httpd:
|
||||
if httpd.requests:
|
||||
# Should never get here unless something went horribly wrong, like the
|
||||
# user hitting Ctrl-C. Tell our HTTP server that it's done, then do
|
||||
# one more request to flush it. This rarely works; the combination of
|
||||
# threading, self-terminating HTTP servers, and unittest is really
|
||||
# quite flaky. Just what you want in a testing framework, no?
|
||||
httpd.requests = 0
|
||||
if httpd.ready:
|
||||
urllib.urlopen('http://127.0.0.1:8097/tests/wellformed/rss/aaa_wellformed.xml').read()
|
||||
httpd.join(0)
|
||||
|
||||
if __name__ == "__main__":
|
||||
runtests()
|
|
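Each test fixture below embeds its own metadata (`SkipUnless`, `Description`, `Expect`) inside an XML comment; `getDescription()` extracts those fields and the runner above `eval`s them. A minimal sketch of that convention, using a hypothetical `read_test_metadata` helper rather than the project's actual extraction code:

```python
import re

def read_test_metadata(xml_bytes):
    # Hypothetical re-implementation for illustration; the real logic
    # lives in getDescription() in feedparsertest.py.
    text = xml_bytes.decode('utf-8', errors='replace')
    fields = dict(re.findall(
        r'^(SkipUnless|Description|Expect|Header|Note): (.+)$', text, re.M))
    fields.setdefault('SkipUnless', '1')  # tests run unless they opt out
    return fields

sample = b"""<?xml version="1.0" encoding="big5"?>
<!--
SkipUnless: __import__('codecs').lookup('big5')
Description: big5
Expect: not bozo and encoding == 'big5'
-->
<rss>
</rss>"""
meta = read_test_metadata(sample)
runnable = bool(eval(meta['SkipUnless']))  # mirrors `if not eval(skipUnless)`
```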
@ -1 +0,0 @@
<feed><title>deflate</title></feed>

@ -1 +0,0 @@
<feed><title>gzip</title></feed>

@ -1 +0,0 @@
<feed xmlns="http://www.w3.org/2005/Atom"></feed>
@ -1,8 +0,0 @@
<?xml version="1.0" encoding="big5"?>
<!--
SkipUnless: __import__('codecs').lookup('big5')
Description: big5
Expect: not bozo and encoding == 'big5'
-->
<rss>
</rss>
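The big5 fixture above passes only when the declared codec exists (`SkipUnless`) and the document decodes cleanly under it (`Expect: not bozo`). A simplified sketch of reading the declared encoding out of the XML declaration; `declared_encoding` is a hypothetical helper, and the real `convert_to_utf8()` additionally honors byte order marks and declarations written in multi-byte encodings:

```python
import codecs
import re

def declared_encoding(data):
    # Pull the encoding attribute out of the XML declaration, if present.
    m = re.match(br'<\?xml[^>]*encoding=[\'"]([A-Za-z0-9._-]+)[\'"]', data)
    return m.group(1).decode('ascii').lower() if m else None

data = b'<?xml version="1.0" encoding="big5"?>\n<rss>\n</rss>'
enc = declared_encoding(data)                # 'big5'
text = data.decode(codecs.lookup(enc).name)  # decodes cleanly -> not bozo
```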
@ -1,7 +0,0 @@
<?xml version="1.0" encoding="bogus"?>
<!--
Description: bogus encoding
Expect: bozo
-->
<rss>
</rss>
@ -1,13 +0,0 @@
<?xml version="1.0"?>
<!--
Description: utf-8 interpreted as iso-8859-1 and re-encoded as utf-8
Expect: bozo and ord(entries[0]['description']) == 8230
-->
<rss version="2.0">
<channel>
<item>
<description>…</description>
</item>
</channel>
</rss>

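The fixture above expects feedparser to recover the original ellipsis (U+2026, hence `ord(...) == 8230`) from a classic double-encoding: utf-8 bytes mis-read as iso-8859-1 and re-encoded. The round trip can be reproduced directly; this is an illustrative sketch, not feedparser's actual recovery code:

```python
original = '\u2026'                                      # the ellipsis, ord == 8230
mangled = original.encode('utf-8').decode('iso-8859-1')  # 3 chars of mojibake
recovered = mangled.encode('iso-8859-1').decode('utf-8')
assert recovered == original and ord(recovered) == 8230  # what Expect checks
```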
@ -1,10 +0,0 @@
<!--
SkipUnless: __import__('sys').version.split()[0] >= '2.2.0'
Description: crashes
Expect: 1
-->
<rss>
<item>
<description><![CDATA[<a href="http://www.example.com/">¤</a><a href="&"></a>]]></description>
</item>
</rss>
@ -1,11 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!--
Note: text/xml defaults to us-ascii, in conflict with the XML declaration of utf-8
Header: Content-type: text/xml
Description: Content-type with no charset (text/xml defaults to us-ascii)
Expect: bozo and isinstance(bozo_exception, feedparser.CharacterEncodingOverride)
-->

<feed version="0.3" xmlns="http://purl.org/atom/ns#">
<title>Iñtërnâtiônàlizætiøn</title>
</feed>
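The fixture above exercises the RFC 3023 rule: for `text/*` media types the HTTP charset parameter, or its us-ascii default when the parameter is absent, overrides the in-document XML declaration, which is why feedparser reports a `CharacterEncodingOverride`. A sketch of that precedence with a hypothetical `effective_encoding` helper, not feedparser's real resolver:

```python
def effective_encoding(content_type, xml_decl_encoding):
    # Split the media type from its parameters and look for charset=...
    mime, _, params = content_type.partition(';')
    charset = None
    for param in params.split(';'):
        key, _, value = param.strip().partition('=')
        if key.lower() == 'charset':
            charset = value.strip('\'" ').lower()
    if mime.strip().lower().startswith('text/'):
        return charset or 'us-ascii'   # HTTP wins over <?xml encoding=...?>
    return charset or xml_decl_encoding or 'utf-8'

overridden = effective_encoding('text/xml', 'utf-8')   # 'us-ascii'
```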
@ -1,8 +0,0 @@
<?xml version="1.0"?>
<!--
Header: Content-type: text/plain
Description: text/plain + no encoding
Expect: bozo
-->
<rss version="2.0">
</rss>

@ -1,8 +0,0 @@
<?xml version="1.0"?>
<!--
Header: Content-type: text/plain; charset=utf-8
Description: text/plain + charset
Expect: bozo and encoding == 'utf-8'
-->
<rss version="2.0">
</rss>
@ -1,10 +0,0 @@
<!--
Description: Ensure when there are invalid bytes in encoding specified by BOM, feedparser doesn't crash
Expect: bozo
-->
<rss version="2.0">
<channel>
<title>Valid UTF8: ѨInvalid UTF8: España</title>
<description><pre class="screen"></pre></description>
</channel>
</rss
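The last fixture checks that invalid bytes under a BOM-declared encoding set `bozo` rather than crash. BOM sniffing itself reduces to a prefix check, with UTF-32 tested before UTF-16 because the little-endian UTF-32 BOM (`ff fe 00 00`) begins with the UTF-16 one (`ff fe`). A sketch with a hypothetical `bom_encoding` helper:

```python
import codecs

def bom_encoding(data):
    # Map a leading byte order mark to a codec name; order matters, see above.
    boms = ((codecs.BOM_UTF32_BE, 'utf-32-be'),
            (codecs.BOM_UTF32_LE, 'utf-32-le'),
            (codecs.BOM_UTF8, 'utf-8'),
            (codecs.BOM_UTF16_BE, 'utf-16-be'),
            (codecs.BOM_UTF16_LE, 'utf-16-le'))
    for bom, name in boms:
        if data.startswith(bom):
            return name
    return None
```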