Generated reference documentation for all the public functionality of testtools.
Please send patches if you notice anything confusing or wrong, or that could be improved.
Extensions to the standard Python unittest library.
Copy a TestCase, and give the copied test a new id.
This is only expected to be used on tests that have been constructed but not executed.
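A minimal sketch, assuming this describes testtools.clone_test_with_new_id; SomeTest is a hypothetical TestCase class used for illustration:
from testtools import clone_test_with_new_id

original = SomeTest('test_feature')
variant = clone_test_with_new_id(original, original.id() + '(variant)')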
Copies all events it receives to multiple results.
This provides an easy facility for combining multiple StreamResults.
For TestResult the equivalent class was MultiTestResult.
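A sketch of combining two results, assuming this describes testtools.CopyStreamResult, which is constructed with a list of target StreamResults:
from testtools import CopyStreamResult, StreamSummary

both = CopyStreamResult([StreamSummary(), StreamSummary()])
both.startTestRun()
both.status(test_id='test.me', test_status='success')
both.stopTestRun()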
A TestSuite whose run() calls out to a concurrency strategy.
Run the tests concurrently.
This calls out to the provided make_tests helper, and then serialises the results so that result only sees activity from one TestCase at a time.
ConcurrentTestSuite provides no special mechanism to stop the tests returned by make_tests; it is up to those tests to honour the shouldStop attribute on the result object they are run with, which will be set if an exception is raised in the thread in which ConcurrentTestSuite.run is called.
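A sketch, assuming the ConcurrentTestSuite(suite, make_tests) constructor, where make_tests receives the wrapped suite and returns the units to run concurrently; the even/odd split below is illustrative only:
import unittest
from testtools import ConcurrentTestSuite, TestResult
from testtools.testsuite import iterate_tests

def split_suite(suite):
    # divide the tests into two sub-suites that run in parallel
    tests = list(iterate_tests(suite))
    return [unittest.TestSuite(tests[0::2]), unittest.TestSuite(tests[1::2])]

result = TestResult()
ConcurrentTestSuite(suite, split_suite).run(result)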
A TestSuite whose run() parallelises.
Run the tests concurrently.
This calls out to the provided make_tests helper to determine the concurrency to use and to assign routing codes to each worker.
ConcurrentStreamTestSuite provides no special mechanism to stop the tests returned by make_tests; it is up to the made tests to honour the shouldStop attribute on the result object they are run with, which will be set if the test run is to be aborted.
The tests are run with an ExtendedToStreamDecorator wrapped around a StreamToQueue instance. ConcurrentStreamTestSuite dequeues events from the queue and forwards them to result. Tests can therefore be either original unittest tests (or compatible tests), or new tests that emit StreamResult events directly.
Parameters: result – A StreamResult instance. The caller is responsible for calling startTestRun on this instance prior to invoking suite.run, and stopTestRun subsequent to the run method returning.
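A sketch, assuming make_tests is a no-argument callable returning (case, route_code) pairs and that suite is an existing unittest suite:
from testtools import ConcurrentStreamTestSuite, StreamResult
from testtools.testsuite import iterate_tests

def make_tests():
    # give each test case its own route code
    return [(case, str(i)) for i, case in enumerate(iterate_tests(suite))]

result = StreamResult()  # substitute any real StreamResult implementation
result.startTestRun()
try:
    ConcurrentStreamTestSuite(make_tests).run(result)
finally:
    result.stopTestRun()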
Decorate a TestCase and permit customisation of the result for runs.
Construct an ErrorHolder.
A context manager to handle expected exceptions.
def test_foo(self):
    with ExpectedException(ValueError, 'fo.*'):
        raise ValueError('foo')
will pass. If the raised exception has a type other than the specified type, it will be re-raised. If it has a 'str()' that does not match the given regular expression, an AssertionError will be raised. If no exception is raised, an AssertionError will be raised.
Permit new TestResult API code to degrade gracefully with old results.
This decorates an existing TestResult and converts missing outcomes such as addSkip to older outcomes such as addSuccess. It also supports the extended details protocol. In all cases the most recent protocol is attempted first, and fallbacks only occur when the decorated result does not support the newer style of calling.
Permit using old TestResult API code with new StreamResult objects.
This decorates a StreamResult and converts old (Python 2.6 / 2.7 / Extended) TestResult API calls into StreamResult calls.
It also supports regular StreamResult calls, making it safe to wrap around any StreamResult.
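For example, a sketch of running an ordinary TestCase against a StreamResult by wrapping it (case is an existing test case; StreamSummary is used here as the underlying stream result):
from testtools import ExtendedToStreamDecorator, StreamSummary

result = ExtendedToStreamDecorator(StreamSummary())
result.startTestRun()
try:
    case.run(result)
finally:
    result.stopTestRun()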
The currently set tags.
Add and remove tags from the test.
Iterate through all of the test cases in ‘test_suite_or_case’.
Represents many exceptions raised from some operation.
Variables: args – The sys.exc_info() tuples for each exception.
A test result that dispatches to many test results.
Was this result successful?
Only returns True if every constituent result was successful.
A placeholder test.
PlaceHolder implements much of the same interface as TestCase and is particularly suitable for being added to TestResults.
Decorate a test as using a specific RunTest.
e.g.:
@run_test_with(CustomRunner, timeout=42)
def test_foo(self):
    self.assertTrue(True)
The returned decorator works by setting an attribute on the decorated function. TestCase.__init__ looks for this attribute when deciding on a RunTest factory. If you wish to use multiple decorators on a test method, then you must either make this one the top-most decorator, or you must write your decorators so that they update the wrapping function with the attributes of the wrapped function. The latter is recommended style anyway. functools.wraps, functools.update_wrapper and twisted.python.util.mergeFunctionMetadata can help you do this.
Returns: A decorator to be used for marking a test as needing a special runner.
Tag each test individually.
Extensions to the basic TestCase.
Add a cleanup function to be called after tearDown.
Functions added with addCleanup will be called in reverse order of adding after tearDown, or after setUp if setUp raises an exception.
If a function added with addCleanup raises an exception, the error will be recorded as a test error, and the next cleanup will then be run.
Cleanup functions are always called before a test finishes running, even if setUp is aborted by an exception.
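A minimal sketch; the file name is illustrative only:
def test_writes_report(self):
    handle = open('report.txt', 'w')
    self.addCleanup(handle.close)   # closed after tearDown, even on failure
    handle.write('ok')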
Add a detail to be reported with this test’s outcome.
For more details see pydoc testtools.TestResult.
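For example, attaching a small piece of text (a sketch; the detail name and content are illustrative):
from testtools.content import text_content

def test_config(self):
    self.addDetail('config', text_content('debug=True'))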
Add a detail to the test, but ensure its name is unique.
This method checks whether name conflicts with a detail that has already been added to the test. If it does, it will modify name to avoid the conflict.
For more details see pydoc testtools.TestResult.
Add a handler to be called when an exception occurs in test code.
This handler cannot affect what result methods are called, and is called before any outcome is called on the result object. An example use for it is to add some diagnostic state to the test details dict which is expensive to calculate and not interesting for reporting in the success case.
Handlers are called before the outcome (such as addFailure) that the exception has caused.
Handlers are called in first-added, first-called order, and if they raise an exception, that will propagate out of the test running machinery, halting test processing. As a result, do not call code that may unreasonably fail.
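A sketch of attaching expensive diagnostic state only when an exception occurs; self.server_log is a hypothetical attribute used for illustration:
from testtools.content import text_content

def setUp(self):
    super().setUp()
    # only gather the log if the test actually raises
    self.addOnException(
        lambda exc_info: self.addDetail('server-log', text_content(self.server_log)))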
Assert that ‘expected’ is equal to ‘observed’.
Assert that ‘expected’ is equal to ‘observed’.
Assert that needle is in haystack.
Assert that ‘expected’ is ‘observed’.
Assert that ‘observed’ is equal to None.
Assert that ‘expected’ is not ‘observed’.
Assert that ‘observed’ is not equal to None.
Assert that needle is not in haystack.
Fail unless an exception of class excClass is thrown by callableObj when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.
Assert that matchee is matched by matcher.
Raises MismatchError: When matcher does not match thing.
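For example (both matchers are part of testtools.matchers):
from testtools.matchers import Contains, Equals

self.assertThat('foobar', Contains('foo'))
self.assertThat([1, 2, 3], Equals([1, 2, 3]))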
Check that a test fails in a particular way.
If the test fails in the expected way, a KnownFailure is caused. If it succeeds, an UnexpectedSuccess is caused.
The expected use of expectFailure is as a barrier at the point in a test where the test would fail. For example:
>>> def test_foo(self):
>>>     self.expectFailure("1 should be 0", self.assertNotEqual, 1, 0)
>>>     self.assertEqual(1, 0)
If in the future 1 were to equal 0, the expectFailure call can simply be removed. This separation preserves the original intent of the test while it is in the expectFailure mode.
Check that matchee is matched by matcher, but delay the assertion failure.
This method behaves similarly to assertThat, except that a failed match does not exit the test immediately. The rest of the test code will continue to run, and the test will be marked as failing after the test has finished.
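A sketch of collecting several failures from a single test:
from testtools.matchers import Equals

def test_both_reported(self):
    self.expectThat(1 + 1, Equals(2))
    self.expectThat('a'.upper(), Equals('A'))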
Assert that ‘expected’ is equal to ‘observed’.
Fail unless an exception of class excClass is thrown by callableObj when invoked with arguments args and keyword arguments kwargs. If a different type of exception is thrown, it will not be caught, and the test case will be deemed to have suffered an error, exactly as for an unexpected exception.
Get the details dict that will be reported with this test’s outcome.
For more details see pydoc testtools.TestResult.
Get an integer unique to this test.
Returns an integer that is guaranteed to be unique to this instance. Use this when you need an arbitrary integer in your test, or as a helper for custom anonymous factory methods.
Get a string unique to this test.
Returns a string that is guaranteed to be unique to this instance. Use this when you need an arbitrary string in your test, or as a helper for custom anonymous factory methods.
Parameters: prefix – The prefix of the string. If not provided, defaults to the id of the test.
Returns: A bytestring of '<prefix>-<unique_int>'.
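For example (the prefix is illustrative):
name = self.getUniqueString('user')   # e.g. 'user-1'
count = self.getUniqueInteger()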
Called when an exception propagates from test code.
See also: addOnException.
Monkey-patch ‘obj.attribute’ to ‘value’ while the test is running.
If ‘obj’ has no attribute, then the monkey-patch will still go ahead, and the attribute will be deleted instead of restored to its original value.
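A minimal sketch, silencing time.sleep for the duration of the test:
import time

def test_fast(self):
    self.patch(time, 'sleep', lambda seconds: None)   # restored automatically afterwards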
Cause this test to be skipped.
This raises self.skipException(reason). skipException is raised to permit a skip to be triggered at any point (during setUp or the testMethod itself). The run() method catches skipException and translates that into a call to the result object's addSkip method.
Parameters: reason – The reason why the test is being skipped. This must support being cast into a unicode string for reporting.
alias of SkipTest
Cause this test to be skipped.
This raises self.skipException(reason). skipException is raised to permit a skip to be triggered at any point (during setUp or the testMethod itself). The run() method catches skipException and translates that into a call to the result object's addSkip method.
Parameters: reason – The reason why the test is being skipped. This must support being cast into a unicode string for reporting.
Use fixture in a test case.
The fixture will be setUp, and self.addCleanup(fixture.cleanUp) called.
Parameters: fixture – The fixture to use.
Returns: The fixture, after setting it up and scheduling a cleanup for it.
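A sketch, assuming the separate fixtures package is available:
import os
from fixtures import TempDir

def test_uses_tempdir(self):
    tempdir = self.useFixture(TempDir())   # set up now, cleaned up after the test
    self.assertTrue(os.path.isdir(tempdir.path))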
Command to run unit tests with testtools
Call something every time a test completes.
Subclass of unittest.TestResult extending the protocol for flexibility.
This test result supports an experimental protocol for providing additional data with test outcomes. All the outcome methods take an optional dict 'details'. If supplied, any other detail parameters like 'err' or 'reason' should not be provided. The details dict is a mapping from names to MIME content objects (see testtools.content). This permits attaching tracebacks, log files, or even large objects like databases that were part of the test fixture. Until this API is accepted into upstream Python it is considered experimental: it may be replaced at any point by a newer version more in line with upstream Python. Compatibility would be aimed for in this case, but may not be possible.
Variables: skip_reasons – A dict of skip-reasons -> list of tests. See addSkip.
Called when an error has occurred. ‘err’ is a tuple of values as returned by sys.exc_info().
Parameters: details – Alternative way to supply details about the outcome. See the class docstring for more information.
Called when a test has failed in an expected manner.
Like with addSuccess and addError, testStopped should still be called.
Returns: None
Called when an error has occurred. ‘err’ is a tuple of values as returned by sys.exc_info().
Parameters: details – Alternative way to supply details about the outcome. See the class docstring for more information.
Called when a test has been skipped rather than running.
Like with addSuccess and addError, testStopped should still be called.
This must be called by the TestCase. ‘addError’ and ‘addFailure’ will not call addSkip, since they have no assumptions about the kind of errors that a test can raise.
Returns: None
Called when a test succeeded.
Called when a test was expected to fail, but succeeded.
The currently set tags.
Called when the test runner is done.
Deprecated in favour of stopTestRun.
Called before a test run starts.
New in Python 2.7. The testtools version resets the result to a pristine condition ready for use in another test run. Note that this is different from Python 2.7’s startTestRun, which does nothing.
Called after a test run completes
New in Python 2.7.
Add and remove tags from the test.
Provide a timestamp to represent the current time.
This is useful when test activity is time delayed, or happening concurrently and getting the system time between API calls will not accurately represent the duration of tests (or the whole run).
Calling time() sets the datetime used by the TestResult object. Time is permitted to go backwards when using this call.
Parameters: a_datetime – A datetime.datetime object with TZ information or None to reset the TestResult to gathering time from the system.
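For example, freezing the reported time and then returning to the system clock:
from datetime import datetime, timezone
from testtools import TestResult

result = TestResult()
result.time(datetime(2024, 1, 1, tzinfo=timezone.utc))   # use the supplied timestamp
# ... report outcomes; durations are measured against the supplied times ...
result.time(None)                                        # resume reading the system clock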
Has this result been successful so far?
If there have been any errors, failures or unexpected successes, return False. Otherwise, return True.
Note: This differs from standard unittest in that we consider unexpected successes to be equivalent to failures, rather than successes.
General pass-through decorator.
This provides a base that other TestResults can inherit from to gain basic forwarding functionality.
A TestResult which outputs activity to a text stream.
An object to run a test.
RunTest objects are used to implement the internal logic involved in running a test. TestCase.__init__ stores _RunTest as the class of RunTest to execute. Passing the runTest= parameter to TestCase.__init__ allows a different RunTest class to be used to execute the test.
Subclassing or replacing RunTest can be useful to add functionality to the way that tests are run in a given project.
Run self.case reporting activity to result.
Parameters: result – Optional testtools.TestResult to report activity to.
Returns: The result object the test was run against.
A decorator to skip unit tests.
This is just syntactic sugar so users don’t have to change any of their unit tests in order to migrate to python 2.7, which provides the @unittest.skip decorator.
A decorator to skip a test if the condition is true.
A decorator to skip a test unless the condition is true.
Call the supplied callback if an error is seen in a stream.
An example callback:
def do_something():
    pass
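A sketch, assuming this describes testtools.StreamFailFast:
from testtools import StreamFailFast

fail_fast = StreamFailFast(do_something)
# pass fail_fast to a test run, typically alongside other results via CopyStreamResult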
A test result for reporting the activity of a test run.
Typical use
>>> result = StreamResult()
>>> result.startTestRun()
>>> try:
... case.run(result)
... finally:
... result.stopTestRun()
The case object will be either a TestCase or a TestSuite, and will generally make a sequence of calls like:
>>> result.status(self.id(), 'inprogress')
>>> result.status(self.id(), 'success')
General concepts
StreamResult is built to process events that are emitted by tests during a test run or test enumeration. The test run may be running concurrently, and even be spread out across multiple machines.
All events are timestamped to prevent network buffering or scheduling latency causing false timing reports. Timestamps are datetime objects in the UTC timezone.
A route_code is a unicode string that identifies where a particular test ran. This is optional in the API but very useful when multiplexing multiple streams together as it allows identification of interactions between tests that were run on the same hardware or in the same test process. Generally actual tests never need to bother with this - it is added and processed by StreamResults that do multiplexing / run analysis. route_codes are also used to route stdin back to pdb instances.
The StreamResult base class does no accounting or processing, rather it just provides an empty implementation of every method, suitable for use as a base class regardless of intent.
Start a test run.
This will prepare the test result to process results (which might imply connecting to a database or remote machine).
Inform the result about a test status.
Stop a test run.
This informs the result that no more test updates will be received. At this point any test ids that have started and not completed can be considered failed-or-hung.
A StreamResult that routes events.
StreamResultRouter forwards received events to another StreamResult object, selected by a dynamic forwarding policy. Events where no destination is found are forwarded to the fallback StreamResult, or an error is raised.
Typical use is to construct a router with a fallback and then either create up front mapping rules, or create them as-needed from the fallback handler:
>>> router = StreamResultRouter()
>>> sink = doubles.StreamResult()
>>> router.add_rule(sink, 'route_code_prefix', route_prefix='0',
... consume_route=True)
>>> router.status(test_id='foo', route_code='0/1', test_status='uxsuccess')
StreamResultRouter has no buffering.
When adding routes (and for the fallback) whether to call startTestRun and stopTestRun or to not call them is controllable by passing ‘do_start_stop_run’. The default is to call them for the fallback only. If a route is added after startTestRun has been called, and do_start_stop_run is True then startTestRun is called immediately on the new route sink.
There is no a-priori defined lookup order for routes: if they are ambiguous the behaviour is undefined. Only a single route is chosen for any event.
Add a rule to route events to sink when they match a given policy.
Raises: ValueError if the policy is unknown.
Raises: TypeError if the policy is given arguments it cannot handle.
route_code_prefix routes events based on a prefix of the route code in the event. It takes a route_prefix argument to match on (e.g. ‘0’) and a consume_route argument, which, if True, removes the prefix from the route_code when forwarding events.
test_id routes events based on the test id. It takes a single argument, test_id. Use None to select non-test events.
A specialised StreamResult that summarises a stream.
The summary uses the same representation as the original unittest.TestResult contract, allowing it to be consumed by any test runner.
Return False if any failure has occurred.
Note that incomplete tests can only be detected when stopTestRun is called, so that should be called before checking wasSuccessful.
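A sketch driving the summary with StreamResult status events directly:
from testtools import StreamSummary

summary = StreamSummary()
summary.startTestRun()
summary.status(test_id='test.me', test_status='fail')
summary.stopTestRun()
summary.wasSuccessful()   # False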
Adds or discards tags from StreamResult events.
A specialised StreamResult that emits a callback as tests complete.
Top level file attachments are simply discarded. Hung tests are detected by stopTestRun and notified there and then.
The callback is passed a dict with the following keys:
- id: the test id.
- tags: The tags for the test. A set of unicode strings.
- details: A dict of file attachments - testtools.content.Content objects.
- status: One of the StreamResult status codes (including inprogress) or ‘unknown’ (used if only file events for a test were received...)
- timestamps: A pair of timestamps - the first one received with this test id, and the one in the event that triggered the notification. Hung tests have a None for the second end event. Timestamps are not compared - their ordering is purely order received in the stream.
Only the most recent tags observed in the stream are reported.
Convert StreamResult API calls into ExtendedTestResult calls.
This will buffer all calls for all concurrently active tests, and then flush each test as they complete.
Incomplete tests will be flushed as errors when the test run stops.
Non-test file attachments are accumulated into a test called 'testtools.extradata' and flushed at the end of the run.
A StreamResult which enqueues events as a dict to a queue.Queue.
Events have their route code updated to include the route code StreamToQueue was constructed with before they are submitted. If the event route code is None, it is replaced with the StreamToQueue route code, otherwise it is prefixed with the supplied code + a hyphen.
startTestRun and stopTestRun are forwarded to the queue. Implementors that dequeue events back into StreamResult calls should take care not to call startTestRun / stopTestRun on other StreamResult objects multiple times (e.g. by filtering startTestRun and stopTestRun).
StreamToQueue is typically used by ConcurrentStreamTestSuite, which creates one StreamToQueue per thread, forwards status events to the StreamResult that ConcurrentStreamTestSuite.run() was called with, and uses the stopTestRun event to trigger calling join() on each thread.
Unlike ThreadsafeForwardingResult, which this supersedes, no buffering takes place - any event supplied to a StreamToQueue will be inserted into the queue immediately.
Events are forwarded as a dict with a key event which is one of startTestRun, stopTestRun or status. When event is status the dict also has keys matching the keyword arguments of StreamResult.status, otherwise it has one other key result which is the result that invoked startTestRun.
Adjust route_code on the way through.
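A sketch, assuming a StreamToQueue(queue, routing_code) constructor:
from queue import Queue
from testtools import StreamToQueue

events = Queue()
worker_result = StreamToQueue(events, '0')
worker_result.startTestRun()
worker_result.status(test_id='test.me', test_status='success')
worker_result.stopTestRun()
event = events.get()   # a dict with an 'event' key ('startTestRun' for the first item)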
Controls a running test run, allowing it to be interrupted.
Variables: shouldStop – If True, tests should not run and should instead return immediately. Similarly a TestSuite should check this between each test and if set stop dispatching any new tests and return.
Indicate that tests should stop running.
A TestResult which ensures the target does not receive mixed up calls.
Multiple ThreadsafeForwardingResults can forward to the same target result, and that target result will only ever receive the complete set of events for one test at a time.
This is enforced using a semaphore, which further guarantees that tests will be sent atomically even if the ThreadsafeForwardingResults are in different threads.
ThreadsafeForwardingResult is typically used by ConcurrentTestSuite, which creates one ThreadsafeForwardingResult per thread, each of which wraps the TestResult that ConcurrentTestSuite.run() is called with.
target.startTestRun() and target.stopTestRun() are called once for each ThreadsafeForwardingResult that forwards to the same target. If the target takes special action on these events, it should take care to accommodate this.
time() and tags() calls are batched to be adjacent to the test result and in the case of tags() are coerced into test-local scope, avoiding the opportunity for bugs around global state in the target.
See TestResult.
A StreamResult decorator that assigns a timestamp when none is present.
This is convenient for ensuring events are timestamped.
Attempt to import name. If it fails, return alternative.
When supporting multiple versions of Python or optional dependencies, it is useful to be able to try to import a module.
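For example:
from testtools import try_import

etree = try_import('lxml.etree')   # None (the default alternative) if lxml is not installed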
Attempt to import modules.
Tries to import the first module in module_names. If it can be imported, we return it. If not, we go on to the second module and try that. The process continues until we run out of modules to try. If none of the modules can be imported, either raise an exception or return the provided alternative value.
Raises ImportError: If none of the modules can be imported and no alternative value was specified.
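For example, preferring the faster implementation where available:
from testtools import try_imports

ElementTree = try_imports(['lxml.etree', 'xml.etree.ElementTree'])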
All the matchers.
Matchers, a way to express complex assertions outside the testcase.
Inspired by ‘hamcrest’.
Matcher provides the abstract API that all matchers need to implement.
Bundled matchers are listed in __all__: a list can be obtained by running $ python -c 'import testtools.matchers; print(testtools.matchers.__all__)'.
Matches if the value matches after passing through a function.
This can be used to aid in creating trivial matchers as functions, for example:
def PathHasFileContent(content):
    def _read(path):
        return open(path).read()
    return AfterPreprocessing(_read, Equals(content))
Matches if all provided values match the given matcher.
Annotates a matcher with a descriptive string.
Mismatches are then described as ‘<mismatch>: <annotation>’.
Annotate matcher only if annotation is non-empty.
Matches if any of the provided values match the given matcher.
Checks whether something is contained in another thing.
Make a matcher that checks whether a list of things is contained in another thing.
The matcher effectively checks that the provided sequence is a subset of the matchee.
Match a dictionary for which this is a super-dictionary.
Specify a dictionary mapping keys (often strings) to matchers. This is the ‘expected’ dict. Any dictionary that matches this must have only these keys, and the values must match the corresponding matchers in the expected dict. Dictionaries that have fewer keys can also match.
In other words, any matching dictionary must be contained by the dictionary given to the constructor.
Does not check for strict super-dictionary. That is, equal dictionaries match.
Match a dictionary that contains a specified sub-dictionary.
Specify a dictionary mapping keys (often strings) to matchers. This is the ‘expected’ dict. Any dictionary that matches this must have at least these keys, and the values must match the corresponding matchers in the expected dict. Dictionaries that have more keys will also match.
In other words, any matching dictionary must contain the dictionary given to the constructor.
Does not check for strict sub-dictionary. That is, equal dictionaries match.
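A sketch, assuming these describe the ContainedByDict and ContainsDict matchers in testtools.matchers:
from testtools.matchers import ContainsDict, Equals

# {'a': 1, 'b': 2} contains the expected sub-dictionary {'a': 1}
self.assertThat({'a': 1, 'b': 2}, ContainsDict({'a': Equals(1)}))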
Matches if the given directory contains files with the given names.
That is, is the directory listing exactly equal to the given files?
Matches if the path exists and is a directory.
See if a string matches a doctest example.
Checks whether one string ends with another.
Matches if the items are equal.
eq(a, b) – Same as a==b.
Matches if the given file has the specified contents.
Matches if the given path exists and is a file.
Matches if the item is greater than the matcher's reference object.
gt(a, b) – Same as a>b.
Matches if a file has the given permissions.
Permissions are specified and matched as a four-digit octal string.
Matches if the items are identical.
is_(a, b) – Same as a is b.
Matcher that wraps isinstance.
Checks whether a dict has particular keys.
Matches if the item is less than the matcher's reference object.
lt(a, b) – Same as a<b.
Matches if all of the matchers it is created with match.
Matches if any of the matchers it is created with match.
Match a dictionary exactly, by its keys.
Specify a dictionary mapping keys (often strings) to matchers. This is the ‘expected’ dict. Any dictionary that matches this must have exactly the same keys, and the values must match the corresponding matchers in the expected dict.
Match an exc_info tuple against an exception instance or type.
Matches if each matcher matches the corresponding value.
More easily explained by example than in words:
>>> from ._basic import Equals
>>> MatchesListwise([Equals(1)]).match([1])
>>> MatchesListwise([Equals(1), Equals(2)]).match([1, 2])
>>> print (MatchesListwise([Equals(1), Equals(2)]).match([2, 1]).describe())
Differences: [
1 != 2
2 != 1
]
>>> matcher = MatchesListwise([Equals(1), Equals(2)], first_only=True)
>>> print (matcher.match([3, 4]).describe())
1 != 3
Match if a given function returns True.
It is reasonably common to want to make a very simple matcher based on a function that you already have that returns True or False given a single argument (i.e. a predicate function). This matcher makes it very easy to do so. e.g.:
IsEven = MatchesPredicate(lambda x: x % 2 == 0, '%s is not even')
self.assertThat(4, IsEven)
Match if a given parameterised function returns True.
It is reasonably common to want to make a very simple matcher based on a function that you already have that returns True or False given some arguments. This matcher makes it very easy to do so. e.g.:
HasLength = MatchesPredicate(
    lambda x, y: len(x) == y, 'len({0}) is not {1}')
# This assertion will fail, as 'len([1, 2]) == 3' is False.
self.assertThat([1, 2], HasLength(3))
Note that, unlike MatchesPredicate, MatchesPredicateWithParams returns a factory which you then customise to use by constructing an actual matcher from it.
The predicate function should take the object to match as its first parameter. Any additional parameters supplied when constructing a matcher are supplied to the predicate as additional parameters when checking for a match.
Matches if the matchee is matched by a regular expression.
Matches if all the matchers match elements of the value being matched.
That is, each element in the ‘observed’ set must match exactly one matcher from the set of matchers, with no matchers left over.
The difference compared to MatchesListwise is that the order of the matchings does not matter.
Matcher that matches an object structurally.
‘Structurally’ here means that attributes of the object being matched are compared against given matchers.
fromExample allows the creation of a matcher from a prototype object and then modified versions can be created with update.
byEquality creates a matcher in much the same way as the constructor, except that the matcher for each of the attributes is assumed to be Equals.
byMatcher creates a similar matcher to byEquality, but you get to pick the matcher, rather than just using Equals.
Matches an object where the attributes equal the keyword values.
Similar to the constructor, except that the matcher is assumed to be Equals.
Matches an object where the attributes match the keyword values.
Similar to the constructor, except that the provided matcher is used to match all of the values.
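A sketch; Point is a hypothetical class used for illustration:
from testtools.matchers import Equals, MatchesStructure

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

self.assertThat(Point(1, 2), MatchesStructure(x=Equals(1), y=Equals(2)))
self.assertThat(Point(1, 2), MatchesStructure.byEquality(x=1, y=2))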
Matches if the items are not equal.
In most cases, this is equivalent to Not(Equals(foo)). The difference only matters when testing __ne__ implementations.
ne(a, b) – Same as a!=b.
Inverts a matcher.
Matches if the given path exists.
Use like this:
assertThat('/some/path', PathExists())
Match if the matchee raises an exception when called.
Exceptions which are not subclasses of Exception propagate out of the Raises.match call unless they are explicitly matched.
Make a matcher that checks that a callable raises an exception.
This is a convenience function, exactly equivalent to:
return Raises(MatchesException(exception))
See Raises and MatchesException for more information.
Matches if two paths are the same.
That is, the paths are equal, or they point to the same file but in different ways. The paths do not have to exist.
Checks whether one string starts with another.
Matches if the given tarball contains the given paths.
Uses TarFile.getnames() to get the paths out of the tarball.