
testtools's Introduction

testtools

testtools is a set of extensions to the Python standard library's unit testing framework.

These extensions have been derived from years of experience with unit testing in Python and come from many different sources.

Documentation

If you would like to learn more about testtools, consult our documentation in the 'doc/' directory. You might like to start at 'doc/overview.rst' or 'doc/for-test-authors.rst'.

To build HTML versions of the documentation, make sure you have Sphinx installed and run 'make docs'.

Licensing

This project is distributed under the MIT license and copyright is owned by Jonathan M. Lange and the testtools authors. See LICENSE for details.

Some code in 'testtools/run.py' is taken from Python's unittest module and is copyright Steve Purcell and the Python Software Foundation. It is distributed under the same license as Python; see LICENSE for details.

Supported platforms

  • Python 3.7+ or PyPy3

If you would like to use testtools with earlier versions of Python, consult the compatibility documentation.

testtools probably works on all OSes that Python works on, but is most heavily tested on Linux and macOS.

Optional Dependencies

If you would like to use our Twisted support, then you will need the testtools[twisted] extra.

If you want to use fixtures then you can either install fixtures (e.g. from https://launchpad.net/python-fixtures or https://pypi.python.org/pypi/fixtures) or alternatively just make sure your fixture objects obey the same protocol.
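The protocol in question is small: roughly setUp(), cleanUp(), and optionally getDetails(). A minimal duck-typed sketch (the class and attribute names here are illustrative, not from the fixtures package):

```python
class TempValueFixture(object):
    """Duck-typed object obeying the fixture protocol useFixture expects."""

    def __init__(self):
        self.value = None

    def setUp(self):
        # Acquire the resource this fixture manages.
        self.value = "resource"

    def cleanUp(self):
        # Release the resource again.
        self.value = None

    def getDetails(self):
        # Extra diagnostic content to attach to test failures (none here).
        return {}
```

Any object with these methods can be passed to a testtools TestCase's useFixture without installing the fixtures package itself.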

Bug reports and patches

Please report bugs using Launchpad at <https://bugs.launchpad.net/testtools>. Patches should be submitted as GitHub pull requests, or mailed to the authors. See doc/hacking.rst for more details.

There's no mailing list for this project yet; however, the testing-in-python mailing list may be a useful resource.

History

testtools used to be called 'pyunit3k'. The name was changed to avoid conflating the library with the Python 3.0 release (commonly referred to as 'py3k').

Thanks

  • Canonical Ltd
  • Bazaar
  • Twisted Matrix Labs
  • Robert Collins
  • Andrew Bennetts
  • Benjamin Peterson
  • Jamu Kakar
  • James Westby
  • Martin [gz]
  • Michael Hudson-Doyle
  • Aaron Bentley
  • Christian Kampka
  • Gavin Panella
  • Martin Pool
  • Julia Varlamova
  • ClusterHQ Ltd
  • Tristan Seligmann
  • Jonathan Jacobs
  • Jelmer Vernooij
  • Hugo van Kemenade
  • Zane Bitter

testtools's People

Contributors

abentley, allenap, benjaminp, bigjools, bz2, cclauss, cjwatson, come-maiz, dependabot[bot], emonty, freeekanayaka, gone, hroncok, hugovk, jelmer, jml, mhuin, michel-slm, mtreinish, ncopa, oddbloke, pvinci, rbtcollins, robotfuel, scop, statik, stephenfin, thomir, xiaohanyu, zaneb


testtools's Issues

Performance regression with testtools 1.9

I realized recently that testr got super slow.

On my repository:

time testr last
Ran 5826 tests in 483.814s
PASSED (id=0, skips=5)

real 1m41.814s
user 1m41.197s
sys 0m0.082s

I tried a bunch of things, and finally:

pip install testtools==1.8.0

time testr last
Ran 5826 tests in 483.814s
PASSED (id=0, skips=5)

real 0m20.736s
user 0m20.520s
sys 0m0.051s

So there is a giant performance regression here. I'll try to track it down, but I wanted to file the issue first.

Hard to tell what went wrong when a text content object is broken

I see lots of stack traces such as this one:

Traceback

Traceback (most recent call last):
  File "/usr/bin/autopilot3", line 9, in <module>
    load_entry_point('autopilot==1.5.0', 'console_scripts', 'autopilot3')()
  File "/usr/lib/python3/dist-packages/autopilot/run.py", line 729, in main
    test_app.run()
  File "/usr/lib/python3/dist-packages/autopilot/run.py", line 617, in run
    action()
  File "/usr/lib/python3/dist-packages/autopilot/run.py", line 681, in run_tests
    test_result = test_suite.run(result)
  File "/usr/lib/python3.4/unittest/suite.py", line 125, in run
    test(result)
  File "/usr/lib/python3.4/unittest/case.py", line 625, in __call__
    return self.run(*args, **kwds)
  File "/usr/lib/python3/dist-packages/testscenarios/testcase.py", line 62, in run
    test.run(result)
  File "/usr/lib/python3/dist-packages/testscenarios/testcase.py", line 65, in run
    return super(WithScenarios, self).run(result)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 585, in run
    return self.__RunTest(self, self.exception_handlers).run(result)
  File "/usr/lib/python3/dist-packages/testtools/runtest.py", line 74, in run
    return self._run_one(actual_result)
  File "/usr/lib/python3/dist-packages/testtools/runtest.py", line 88, in _run_one
    return self._run_prepared_result(ExtendedToOriginalDecorator(result))
  File "/usr/lib/python3/dist-packages/testtools/runtest.py", line 107, in _run_prepared_result
    handler(self.case, self.result, e)
  File "/usr/lib/python3/dist-packages/testtools/testcase.py", line 558, in _report_failure
    result.addFailure(self, details=self.getDetails())
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 1149, in addFailure
    return self.decorated.addFailure(test, err)
  File "/usr/lib/python3/dist-packages/autopilot/testresult.py", line 78, in addFailure
    return super(type(self), self).addFailure(test, err, details)
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 1546, in addFailure
    return self.decorated.addFailure(test, err, details=details)
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 1322, in addError
    self._convert(test, err, details, 'fail')
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 1350, in _convert
    test_tags=self.current_tags, timestamp=now)
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 408, in status
    super(CopyStreamResult, self).status(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 649, in status
    self.on_test(self._inprogress.pop(key))
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 742, in _gather_test
    self._handle_status[test_dict['status']](case)
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 761, in _fail
    message = _details_to_str(case._details, special="traceback")
  File "/usr/lib/python3/dist-packages/testtools/testresult/real.py", line 1752, in _details_to_str
    text = content.as_text().strip()
  File "/usr/lib/python3/dist-packages/testtools/content.py", line 94, in as_text
    return _u('').join(self.iter_text())
  File "/usr/lib/python3/dist-packages/testtools/content.py", line 118, in _iter_text
    yield decoder.decode(bytes)
  File "/usr/lib/python3.4/encodings/latin_1.py", line 26, in decode
    return codecs.latin_1_decode(input,self.errors)[0]
TypeError: 'NoneType' does not support the buffer interface

... where testtools attempts to convert a content object to a string, only to find that even though the content type of the content object is 'text', the content object returns something other than text.

This has been partially fixed in a separate PR that makes the text_content function more picky about what it accepts, but I think we should also catch the raised exception, and re-raise with the name of the content object being processed.
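One way to implement the suggested catch-and-re-raise, sketched with a hypothetical helper (not actual testtools API), is to name the detail whose content object misbehaved:

```python
# Hypothetical sketch: decode a detail's byte chunks, and include the name of
# the offending content object when decoding fails because a chunk is not
# actually bytes.
def iter_text_named(name, chunks, encoding="latin-1"):
    for chunk in chunks:
        try:
            yield chunk.decode(encoding)
        except (AttributeError, TypeError) as e:
            raise TypeError(
                "content object %r yielded undecodable chunk %r: %s"
                % (name, chunk, e))
```

With something like this, the traceback above would at least say which detail (e.g. a log attachment) produced the None, instead of a bare buffer-interface error.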

Automated lint checking

Recent discussions in #157 have highlighted the desirability of an automated lint checker with a configuration that's specialized for testtools' idiosyncratic needs.

This would be:

  • part of the test suite
  • configuration that developers could use for local & in-editor checks

If existing code doesn't meet the agreed-upon standard (and isn't that a discussion I'm looking forward to), then we can either ratchet the checks (i.e. no new lint) or have a flag day migration to the standard (there isn't that much WIP, or that much code).
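The ratchet option can be sketched in a few lines; everything below (the baseline number, the helper names) is illustrative, not an actual testtools check:

```python
# Sketch of a "no new lint" ratchet: the current issue count may never exceed
# a recorded baseline, so existing lint is tolerated but regressions fail.
BASELINE = 42  # illustrative placeholder for the recorded count


def count_lint_issues(reports):
    # In practice this would parse linter output; here it just counts entries.
    return len(reports)


def check_ratchet(reports, baseline=BASELINE):
    current = count_lint_issues(reports)
    assert current <= baseline, (
        "lint regressed: %d issues, baseline %d" % (current, baseline))
    return current
```

Lowering the baseline as cleanups land turns the ratchet into a gradual migration toward the flag-day standard.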

Usage not easy

Can I use it like phpunit?

usage of phpunit on terminal:
phpunit abc.php

assertRaises cannot be used as a context manager

Python's unittest.TestCase.assertRaises can be used as a context manager as of Python 2.7. This is nice because it makes the code much easier to read when the call being made is even moderately complicated.

The workaround is to make a new callable that wraps the complicated call, but a context manager would be preferable.

If there is no objection to this feature then I'd be happy to implement it.
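For reference, the standard-library behaviour being requested looks like this (plain unittest, shown as a sketch of the desired API, not something testtools supported at the time of the report):

```python
import unittest


class ContextManagerExample(unittest.TestCase):
    """Demonstrates unittest's assertRaises as a context manager."""

    def test_raises(self):
        # No wrapper callable needed; the complicated call sits in the body.
        with self.assertRaises(ZeroDivisionError) as cm:
            1 / 0
        # The caught exception is available afterwards for further checks.
        self.assertIsInstance(cm.exception, ZeroDivisionError)
```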

Failed tests with 2.0.0 release

I am getting the following failed tests with Python 2.7.11, that are not present on the previous 1.9.0 release.

Usage: prog discover [options]

prog discover: error: no such option: -l
Tests running...
option -l not recognized
Usage: prog [options] [test] [...]

Options:
  -h, --help       Show this message
  -v, --verbose    Verbose output
  -q, --quiet      Minimal output
  -f, --failfast   Stop on first failure
  -c, --catch      Catch control-C and display results
  -b, --buffer     Buffer stdout and stderr during test runs

Examples:
  prog                               - run default set of tests
  prog MyTestSuite                   - run suite 'MyTestSuite'
  prog MyTestCase.testSomething      - run MyTestCase.testSomething
  prog MyTestCase                    - run all 'test*' test methods
                                               in MyTestCase


======================================================================
ERROR: testtools.tests.test_distutilscmd.TestCommandTest.test_test_module
----------------------------------------------------------------------
stdout: {{{
Tests running...

Ran 2 tests in 0.001s
OK
}}}

Traceback (most recent call last):
  File "testtools/tests/test_distutilscmd.py", line 66, in test_test_module
    dist.run_command('test')
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "testtools/distutilscmd.py", line 62, in run
    exit=False)
  File "testtools/run.py", line 198, in __init__
    del self.testLoader.errors[:]
AttributeError: 'TestLoader' object has no attribute 'errors'
======================================================================
ERROR: testtools.tests.test_distutilscmd.TestCommandTest.test_test_suite
----------------------------------------------------------------------
stdout: {{{
Tests running...

Ran 2 tests in 0.001s
OK
}}}

Traceback (most recent call last):
  File "testtools/tests/test_distutilscmd.py", line 88, in test_test_suite
    dist.run_command('test')
  File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "testtools/distutilscmd.py", line 62, in run
    exit=False)
  File "testtools/run.py", line 198, in __init__
    del self.testLoader.errors[:]
AttributeError: 'TestLoader' object has no attribute 'errors'
======================================================================
ERROR: testtools.tests.test_run.TestRun.test_issue_16662
----------------------------------------------------------------------
traceback-1: {{{
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/fixtures/fixture.py", line 125, in cleanUp
    return self._cleanups(raise_errors=raise_first)
  File "/usr/lib/python2.7/site-packages/fixtures/callmany.py", line 88, in __call__
    reraise(error[0], error[1], error[2])
  File "/usr/lib/python2.7/site-packages/fixtures/callmany.py", line 82, in __call__
    cleanup(*args, **kwargs)
ValueError: list.remove(x): x not in list
}}}

Traceback (most recent call last):
  File "testtools/tests/test_run.py", line 350, in test_issue_16662
    ['prog', 'discover', '-l', pkg.package.base], out))
  File "testtools/run.py", line 264, in main
    stdout=stdout)
  File "testtools/run.py", line 171, in __init__
    self.parseArgs(argv)
  File "/usr/lib/python2.7/unittest/main.py", line 113, in parseArgs
    self._do_discovery(argv[2:])
  File "testtools/run.py", line 211, in _do_discovery
    super(TestProgram, self)._do_discovery(argv, Loader=Loader)
  File "/usr/lib/python2.7/unittest/main.py", line 190, in _do_discovery
    options, args = parser.parse_args(argv)
  File "/usr/lib/python2.7/optparse.py", line 1402, in parse_args
    self.error(str(err))
  File "/usr/lib/python2.7/optparse.py", line 1584, in error
    self.exit(2, "%s: error: %s\n" % (self.get_prog_name(), msg))
  File "/usr/lib/python2.7/optparse.py", line 1574, in exit
    sys.exit(status)
SystemExit: 2
======================================================================
FAIL: testtools.tests.test_run.TestRun.test_run_custom_list
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_run.py", line 157, in test_run_custom_list
    raise AssertionError("-l tried to exit. %r" % exc_info[1])
AssertionError: -l tried to exit. SystemExit(2,)
======================================================================
FAIL: testtools.tests.test_run.TestRun.test_run_list
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_run.py", line 189, in test_run_list
    raise AssertionError("-l tried to exit. %r" % exc_info[1])
AssertionError: -l tried to exit. SystemExit(2,)
======================================================================
FAIL: testtools.tests.test_run.TestRun.test_run_list_failed_import
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_run.py", line 216, in test_run_list_failed_import
    """, doctest.ELLIPSIS))
  File "testtools/testcase.py", line 447, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: Expected:
    unittest.loader._FailedTest.runexample
    Failed to import test module: runexample
    Traceback (most recent call last):
      File ".../loader.py", line ..., in _find_test_path
        package = self._get_module_from_name(name)
      File ".../loader.py", line ..., in _get_module_from_name
        __import__(name)
      File ".../runexample/__init__.py", line 1
        class not in
    ...^...
    SyntaxError: invalid syntax

Got:
    <BLANKLINE>
======================================================================
FAIL: testtools.tests.test_run.TestRun.test_run_list_with_loader
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_run.py", line 177, in test_run_list_with_loader
    raise AssertionError("-l tried to exit. %r" % exc_info[1])
AssertionError: -l tried to exit. SystemExit(2,)
======================================================================
FAIL: testtools.tests.test_run.TestRun.test_run_load_list
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_run.py", line 266, in test_run_load_list
    "-l --load-list tried to exit. %r" % exc_info[1])
AssertionError: -l --load-list tried to exit. SystemExit(2,)
======================================================================
FAIL: testtools.tests.test_run.TestRun.test_run_orders_tests
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_run.py", line 240, in test_run_orders_tests
    "-l --load-list tried to exit. %r" % exc_info[1])
AssertionError: -l --load-list tried to exit. SystemExit(2,)

Ran 2525 tests in 0.618s
FAILED (failures=9)

testtools 1.8.0: compile failure and test failures

I've two problems which are kind of related so let me try to explain. testtools-0.5.0 compiles, installs and works fine. However when trying to compile 1.8.0 I run into the following error:

*** Error compiling '/var/tmp/paludis/build/dev-python-testtools-1.8.0/image//usr/x86_64-pc-linux-gnu/lib/python3.4/site-packages/testtools/_compat2x.py'...
File "/usr/x86_64-pc-linux-gnu/lib/python3.4/site-packages/testtools/_compat2x.py", line 16
raise exc_class, exc_obj, exc_tb
^
SyntaxError: invalid syntax

which is also mentioned at https://bugs.launchpad.net/testtools/+bug/953371.

I "solved" this by removing testtools/_compat2x.py for python3 targets (3.3 & 3.4) since it seems testtools-0.5.0 only installed that file for python2 targets as well.

But then other packages using testtools start failing their tests complaining about not being able to find _compat2x.py when running the tests for python3 targets, for example testscenarios:

[...]

ERROR: testscenarios.testcase (unittest.loader.ModuleImportFailure)

Traceback (most recent call last):
  File "/usr/lib/python3.3/unittest/case.py", line 384, in _executeTestPart
    function()
  File "/usr/lib/python3.3/unittest/loader.py", line 32, in testFailure
    raise exception
ImportError: Failed to import test module: testscenarios.testcase
Traceback (most recent call last):
  File "/usr/lib/python3.3/unittest/loader.py", line 261, in _find_tests
    module = self._get_module_from_name(name)
  File "/usr/lib/python3.3/unittest/loader.py", line 239, in _get_module_from_name
    __import__(name)
  File "/var/tmp/paludis/build/dev-python-testscenarios-0.5.0/work/PYTHON_ABIS/3.3/testscenarios-0.5.0/testscenarios/__init__.py", line 58, in <module>
    from testscenarios.scenarios import (
  File "/var/tmp/paludis/build/dev-python-testscenarios-0.5.0/work/PYTHON_ABIS/3.3/testscenarios-0.5.0/testscenarios/scenarios.py", line 33, in <module>
    from testtools.testcase import clone_test_with_new_id
  File "/usr/lib/python3.3/site-packages/testtools/__init__.py", line 59, in <module>
    from testtools.matchers._impl import (
  File "/usr/lib/python3.3/site-packages/testtools/matchers/__init__.py", line 58, in <module>
    from ._basic import (
  File "/usr/lib/python3.3/site-packages/testtools/matchers/_basic.py", line 21, in <module>
    from ..compat import (
  File "/usr/lib/python3.3/site-packages/testtools/compat.py", line 36, in <module>
    from testtools import _compat2x as _compat
ImportError: cannot import name _compat2x
[...]

Any help would be appreciated, for now I've reverted testtools back to 1.5.0 in our distribution which is of course far from any proper solution.

multiple test failures in version 1.8.0

======================================================================
FAIL: testtools.tests.test_run.TestRun.test_run_list_failed_import
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_run.py", line 215, in test_run_list_failed_import
    """, doctest.ELLIPSIS))
  File "testtools/testcase.py", line 435, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: Expected:
    Failed to import test module: runexample
    Traceback (most recent call last):
      File ".../loader.py", line ..., in _find_test_path
        package = self._get_module_from_name(name)
      File ".../loader.py", line ..., in _get_module_from_name
        __import__(name)
      File ".../runexample/__init__.py", line 1
        class not in
    ...^...
    SyntaxError: invalid syntax

Got:
    unittest2.loader._FailedTest.runexample
    Failed to import test module: runexample
    Traceback (most recent call last):
      File "/usr/lib64/python2.7/site-packages/unittest2/loader.py", line 490, in _find_test_path
        package = self._get_module_from_name(name)
      File "/usr/lib64/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
        __import__(name)
      File "/var/tmp/portage/dev-python/testtools-1.8.0/temp/tmpE53tLU/runexample/__init__.py", line 1
        class not in
                ^
    SyntaxError: invalid syntax
    <BLANKLINE>

Ran 1196 tests in 0.197s

This happens with all Python versions from 2.7 to 3.5.

TypeError: _StringException expects unicode

I'm getting the following traceback.

There's quite a few moving pieces to get this error:
Oracle Enterprise Linux 6u3, python 2.6, testtools 0.9.35.
I'm using Intellij IDEA 13.0.2 on OS X and using remote python and the nosetests runner.

The remote python is executed via ssh with the following command, and noserunner.py is somehow related to TeamcityPlugin framework(?).

Based on my interpretation of the stack trace I think this error is coming from testtools. I can create a bug on IDEA if you interpret this traceback differently.

My current workaround is to use the "unittest" runner instead of nosetests.

/usr/bin/python -u /home/ludeman/.pycharm_helpers/pycharm/noserunner.py /Volumes/box/gearbox/curator/smoke/

File "/usr/local/csi/lib/python2.6/site-packages/testtools/testresult/real.py", line 1706, in __init__
    (string,))
TypeError: _StringException expects unicode, got 'Traceback (most recent call last):\n  File "/Volumes/box/gearbox/curator/smoke/test_curator_process_v1.py", line 375, in test_post_curator_process_instance_v1\n    self.recursive_status_check_messages(create_results["status_uri"])\n  File "/Volumes/box/gearbox/curator/smoke/test_curator_process_v1.py", line 316, in recursive_status_check_messages\n    self.recursive_status_check_messages(child)\n  File "/Volumes/box/gearbox/curator/smoke/test_curator_process_v1.py", line 318, in recursive_status_check_messages\n    json.loads(recurse_check_request.text)\nAttributeError: \'NoneType\' object has no attribute \'text\'\n\n-------------------- >> begin captured logging << --------------------\nurllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1\nurllib3.connectionpool: DEBUG: "POST /curator/v1/process HTTP/1.1" 202 332\nurllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1\nurllib3.connectionpool: DEBUG: "GET /curator/v1/status/s-7hkb86dn2exnmp3zvt7xe2kk0b HTTP/1.1" 200 332\nurllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1\nurllib3.connectionpool: DEBUG: "GET /curator/v1/status/s-7hkb86dn2exnmp3zvt7xe2kk0b HTTP/1.1" 200 345\nurllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1\nurllib3.connectionpool: DEBUG: "POST /curator/v1/process-instance HTTP/1.1" 202 341\nurllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1\nurllib3.connectionpool: DEBUG: "GET /curator/v1/status/s-4d9ke2e6cwadwyxc9k0kf9jhra HTTP/1.1" 200 411\nurllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1\nurllib3.connectionpool: DEBUG: "GET /curator/v1/status/s-19027wmqtbnmgbb5g2xjd91g8f HTTP/1.1" 200 453\nurllib3.connectionpool: INFO: Starting new HTTP connection (1): 127.0.0.1\nurllib3.connectionpool: DEBUG: "GET /curator/v1/status/s-c7srd3fsz36c6x2b92vb9hxgw7 HTTP/1.1" 200 393\n--------------------- >> end captured logging << 
---------------------'

Nearly matcher for floats

This would basically be for matching on floats when precision isn't controlled, although theoretically it'll work on any type that implements subtraction and abs():

from testtools.matchers import Mismatch


class Nearly(object):
    """Match if a value is within a certain threshold of the expected value."""

    def __init__(self, expected, epsilon=0.001):
        self.expected = expected
        self.epsilon = epsilon

    def __str__(self):
        return 'Nearly(%r, %r)' % (self.expected, self.epsilon)

    def match(self, value):
        if abs(value - self.expected) > self.epsilon:
            return Mismatch(
                u'%r more than %r from %r' % (
                    value, self.epsilon, self.expected))
        # Returning None (implicitly) signals a match, per the matcher protocol.
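As a side note, the standard library has offered the same inclusive absolute-tolerance predicate since Python 3.5 via math.isclose, which a Nearly implementation could defer to. A sketch of the equivalence (the helper name is illustrative):

```python
import math


def nearly(expected, value, epsilon=0.001):
    # The same predicate the matcher sketch uses: absolute difference
    # within epsilon, inclusive at the boundary.
    return abs(value - expected) <= epsilon

# math.isclose with rel_tol=0 reduces to exactly this absolute-tolerance
# test: abs(a - b) <= abs_tol.
```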

deferredGenerator is deprecated

testtools/deferredruntest.py:306: DeprecationWarning: twisted.internet.defer.deferredGenerator was deprecated in Twisted 15.0.0; please use twisted.internet.defer.inlineCallbacks instead

Should be an easy enough change, I can't provide a patch at the moment so I'm just filing this so as not to forget.

SuccessResultOf matcher for synchronous Deferred testing

I wrote this, based on SynchronousTestCase.successResultOf from Twisted:

from testtools.matchers import Mismatch
from twisted.python.failure import Failure


class SuccessResultOf(object):
    """
    Match if a Deferred has fired with a result.
    """
    def __init__(self, resultMatcher=None):
        self.resultMatcher = resultMatcher


    def __str__(self):
        return 'SuccessResultOf(%s)' % (self.resultMatcher or '')


    def match(self, matchee):
        # Capture the current result, if any, without consuming errors.
        result = []
        matchee.addBoth(result.append)
        if not result:
            return Mismatch(
                'Success result expected on %r, found no result instead' % (
                    matchee,))
        elif isinstance(result[0], Failure):
            return Mismatch(
                'Success result expected on %r,'
                ' found failure result instead:\n%s' % (
                    matchee, result[0].getTraceback()))
        elif self.resultMatcher:
            return self.resultMatcher.match(result[0])
        return None

Presumably it would make sense to add FailureResultOf and NoResult matchers as well; the implementation of these would be a trivial permutation of the SuccessResultOf logic.

If higher-order combinator

In boolean logic: (p → q) ⇒ ¬p ∨ q

Therefore we can define:

from testtools.matchers import MatchesAny, Not


def If(p, q):
    """Match when matcher p matching implies matcher q matches."""
    return MatchesAny(Not(p), q)

I didn't supply a patch yet because I thought kicking the idea around a bit might draw out some useful tweaks to the API; for example, what about the common case of If(MatchesPredicate(..., "message that will never be seen"), ...)?
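The identity itself can be sanity-checked exhaustively with plain booleans before touching the matcher API (pure stdlib sketch):

```python
from itertools import product


def implies(p, q):
    # Material implication rewritten as the proposal suggests: not-p or q.
    return (not p) or q

# Exhaustive check against the conditional reading of p -> q: when p holds,
# q must hold; when p fails, the implication is vacuously true.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == (q if p else True)
```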

Tuple can't be used as a string with lower

(py27)[harlowja@divedeprive taskflow]$ testr list-tests
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1}
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1}
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160}
${PYTHON:-python} -m subunit.run discover -t ./ ./taskflow/tests --list
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "/usr/local/lib/python2.7/runpy.py", line 72, in _run_code
    exec code in run_globals
  File "/home/harlowja/Dev/taskflow/.tox/py27/lib/python2.7/site-packages/subunit/run.py", line 145, in <module>
    main()
  File "/home/harlowja/Dev/taskflow/.tox/py27/lib/python2.7/site-packages/subunit/run.py", line 141, in main
    stdout=stdout, exit=False)
  File "/home/harlowja/Dev/taskflow/.tox/py27/lib/python2.7/site-packages/testtools/run.py", line 218, in __init__
    self.parseArgs(argv)
  File "/home/harlowja/Dev/taskflow/.tox/py27/lib/python2.7/site-packages/testtools/run.py", line 257, in parseArgs
    self._do_discovery(argv[2:])
  File "/home/harlowja/Dev/taskflow/.tox/py27/lib/python2.7/site-packages/testtools/run.py", line 376, in _do_discovery
    loaded = loader.discover(start_dir, pattern, top_level_dir)
  File "/usr/local/lib/python2.7/unittest/loader.py", line 206, in discover
    tests = list(self._find_tests(start_dir, pattern))
  File "/home/harlowja/Dev/taskflow/.tox/py27/lib/python2.7/site-packages/testtools/run.py", line 482, in _find_tests
    if realpath.lower() != fullpath_noext.lower():
AttributeError: 'tuple' object has no attribute 'lower'
Non-zero exit code (1) from test listing.

Possibly splitting matchers into its own package?

I've found myself using matchers a lot for non-testing purposes. They are a very clear way of describing an object, clearer than writing your own callable.

It also seems particularly useful as an easy way to write validators for expected JSON input, etc., if you are using it for something small and you don't want to write a full JSON schema for instance.

It might also be useful for something like the attrs library, for specifying custom validators.

The mismatch reporting is also pretty great.

It seems like a pretty useful tool for people to use, but it might get more visibility if detached from testtools. Right now the documentation is part of the testtools documentation, and people might not learn about it unless they're looking for a testing utility.

As it seems like it can stand on its own, maybe splitting it off into its own package and documentation/example usages outside of testing might encourage more usage and contributions?
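Part of why it can stand alone is that the matcher protocol it would carry along is tiny: match() returns None on success or a mismatch object with describe(). A dependency-free sketch (class names are illustrative):

```python
class SimpleMismatch(object):
    """Minimal mismatch: just carries a description, per the protocol."""

    def __init__(self, description):
        self._description = description

    def describe(self):
        return self._description


class HasKey(object):
    """Dependency-free matcher sketch: a dict must contain a given key."""

    def __init__(self, key):
        self.key = key

    def match(self, value):
        if self.key not in value:
            return SimpleMismatch(
                "missing key %r in %r" % (self.key, value))
        return None  # None signals a match, as in testtools
```

Used for validation rather than testing, HasKey("id").match(payload) either returns None or a mismatch whose describe() explains exactly what was wrong with the input.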

Test suite fails for Python 2.7.10

make PYTHON=python2.7 check fails for me with the current git head and Python 2.7.10. It passes for me with python3.4 and python3.5. All are using the Ubuntu Python packages.

% make PYTHON=python2.7 check
PYTHONPATH=/home/barry/projects/debian/testtools/upstream python2.7 -m testtools.run testtools.tests.test_suite
Tests running...
Unhandled error in Deferred:


Traceback (most recent call last):
  File "testtools/runtest.py", line 108, in _run_prepared_result
    self._run_core()
  File "testtools/runtest.py", line 144, in _run_core
    self.case._run_test_method, self.result):
  File "testtools/runtest.py", line 191, in _run_user
    return fn(*args, **kwargs)
  File "testtools/testcase.py", line 654, in _run_test_method
    return self._get_test_method()()
--- <exception caught here> ---
  File "testtools/tests/test_deferredruntest.py", line 792, in test_assert_fails_with_expected_exception
    1/0
exceptions.ZeroDivisionError: integer division or modulo by zero
Unhandled error in Deferred:


Traceback (most recent call last):
  File "testtools/runtest.py", line 144, in _run_core
    self.case._run_test_method, self.result):
  File "testtools/runtest.py", line 191, in _run_user
    return fn(*args, **kwargs)
  File "testtools/testcase.py", line 654, in _run_test_method
    return self._get_test_method()()
  File "testtools/tests/test_deferredruntest.py", line 773, in test_assert_fails_with_wrong_exception
    defer.maybeDeferred(lambda: 1/0), RuntimeError, KeyboardInterrupt)
--- <exception caught here> ---
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 150, in maybeDeferred
    result = f(*args, **kw)
  File "testtools/tests/test_deferredruntest.py", line 773, in <lambda>
    defer.maybeDeferred(lambda: 1/0), RuntimeError, KeyboardInterrupt)
exceptions.ZeroDivisionError: integer division or modulo by zero
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAssertFailsWith.test_assert_fails_with_expected_exception
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 795, in test_assert_fails_with_expected_exception
    d = assert_fails_with(defer.fail(f), ZeroDivisionError)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAssertFailsWith.test_assert_fails_with_success
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 743, in test_assert_fails_with_success
    d = assert_fails_with(defer.succeed(marker), RuntimeError)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAssertFailsWith.test_assert_fails_with_success_multiple_types
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 758, in test_assert_fails_with_success_multiple_types
    defer.succeed(marker), RuntimeError, ZeroDivisionError)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAssertFailsWith.test_assert_fails_with_wrong_exception
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 773, in test_assert_fails_with_wrong_exception
    defer.maybeDeferred(lambda: 1/0), RuntimeError, KeyboardInterrupt)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAssertFailsWith.test_custom_failure_exception
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 806, in test_custom_failure_exception
    failureException=CustomException)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_async_cleanups
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 368, in test_async_cleanups
    runner = self.make_runner(test, timeout)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_calls_setUp_test_tearDown_in_sequence
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 319, in test_calls_setUp_test_tearDown_in_sequence
    runner = self.make_runner(test, timeout)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_clean_reactor
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 381, in test_clean_reactor
    runner = self.make_runner(test, timeout)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_convenient_construction
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 523, in test_convenient_construction
    factory = AsynchronousDeferredRunTest.make_factory(reactor, timeout)
AttributeError: 'NoneType' object has no attribute 'make_factory'
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_convenient_construction_default_debugging
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 569, in test_convenient_construction_default_debugging
    factory = AsynchronousDeferredRunTest.make_factory(debug=True)
AttributeError: 'NoneType' object has no attribute 'make_factory'
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_convenient_construction_default_reactor
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 546, in test_convenient_construction_default_reactor
    factory = AsynchronousDeferredRunTest.make_factory(reactor=reactor)
AttributeError: 'NoneType' object has no attribute 'make_factory'
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_convenient_construction_default_timeout
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 558, in test_convenient_construction_default_timeout
    factory = AsynchronousDeferredRunTest.make_factory(timeout=timeout)
AttributeError: 'NoneType' object has no attribute 'make_factory'
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_debugging_enabled_during_test_with_debug_flag
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 726, in test_debugging_enabled_during_test_with_debug_flag
    debug=True)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_debugging_unchanged_during_test_by_default
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 711, in test_debugging_unchanged_during_test_by_default
    reactor=self.make_reactor(), timeout=self.make_timeout())
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_deferred_error
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 580, in test_deferred_error
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_exports_reactor
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 401, in test_exports_reactor
    runner = self.make_runner(test, timeout)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_fast_keyboard_interrupt_stops_test_run
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 492, in test_fast_keyboard_interrupt_stops_test_run
    runner = self.make_runner(test, timeout * 5)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_keyboard_interrupt_stops_test_run
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 473, in test_keyboard_interrupt_stops_test_run
    runner = self.make_runner(test, timeout * 5)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_log_err_flushed_is_success
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 669, in test_log_err_flushed_is_success
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_log_err_is_error
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 638, in test_log_err_is_error
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_log_in_details
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 690, in test_log_in_details
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_only_addError_once
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 608, in test_only_addError_once
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_setUp_returns_deferred_that_fires_later
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 286, in test_setUp_returns_deferred_that_fires_later
    runner = self.make_runner(test, timeout=timeout)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_timeout_causes_test_error
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 505, in test_timeout_causes_test_error
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_unhandled_error_from_deferred
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 417, in test_unhandled_error_from_deferred
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_unhandled_error_from_deferred_combined_with_error
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 444, in test_unhandled_error_from_deferred_combined_with_error
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 262, in make_runner
    test, test.exception_handlers, timeout=timeout)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestAsynchronousDeferredRunTest.test_use_convenient_factory
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 532, in test_use_convenient_factory
    factory = AsynchronousDeferredRunTest.make_factory()
AttributeError: 'NoneType' object has no attribute 'make_factory'
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestRunWithLogObservers.test_restores_observers
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 819, in test_restores_observers
    from testtools.deferredruntest import run_with_log_observers
  File "testtools/deferredruntest.py", line 31, in <module>
    from twisted.trial.unittest import _LogObserver
ImportError: cannot import name _LogObserver
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestSynchronousDeferredRunTest.test_failure
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 222, in test_failure
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 201, in make_runner
    return SynchronousDeferredRunTest(test, test.exception_handlers)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestSynchronousDeferredRunTest.test_setUp_followed_by_test
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 239, in test_setUp_followed_by_test
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 201, in make_runner
    return SynchronousDeferredRunTest(test, test.exception_handlers)
TypeError: 'NoneType' object is not callable
======================================================================
ERROR: testtools.tests.test_deferredruntest.TestSynchronousDeferredRunTest.test_success
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_deferredruntest.py", line 208, in test_success
    runner = self.make_runner(test)
  File "testtools/tests/test_deferredruntest.py", line 201, in make_runner
    return SynchronousDeferredRunTest(test, test.exception_handlers)
TypeError: 'NoneType' object is not callable
======================================================================
FAIL: testtools.tests.test_spinner.TestRunInReactor.test_clean_delayed_call
----------------------------------------------------------------------
Traceback (most recent call last):
  File "testtools/tests/test_spinner.py", line 214, in test_clean_delayed_call
    self.assertThat(results, Equals([call]))
  File "testtools/testcase.py", line 435, in assertThat
    raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = [<twisted.internet.base.DelayedCall instance at 0x7ff6e2985b48>]
actual    = [<twisted.internet.base.DelayedCall instance at 0x7ff6de72da70>,
 <twisted.internet.base.DelayedCall instance at 0x7ff6de72dab8>,
 <twisted.internet.base.DelayedCall instance at 0x7ff6de72db00>,
 <twisted.internet.base.DelayedCall instance at 0x7ff6e2985b48>]

Ran 1196 tests in 0.960s
FAILED (failures=32)
Makefile:7: recipe for target 'check' failed
make: *** [check] Error 1

SystemExit prevents traceback and stops test execution

I got this behavior during a test run and was left without any diagnostic output (no traceback, stdout, or stderr).
(In that case the SystemExit was raised by argparse because of a subcommand that did not exist yet.)

Running this code

# test.py
import unittest
import testtools

class Test_testtools(testtools.TestCase):
    def test_1(self):
        pass

    def test_2(self):
        raise SystemExit(0)

    def test_3(self):
        pass

if __name__ == '__main__':
    unittest.main()

produces the following output:

$ python test.py
.E$ 

or with verbose on:

$ python test.py -v
test_1 (__main__.Test_testtools)
__main__.Test_testtools.test_1 ... ok
test_2 (__main__.Test_testtools)
__main__.Test_testtools.test_2 ... ERROR
$ echo $?
0

Note that the third test was not run at all, and there was neither a traceback nor a FAILED notification. The exit status was 0.

When I run the tests with testtools' runner I get this:

$ python -m testtools.run test.py
Tests running...
======================================================================
ERROR: test.Test_testtools.test_2
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test.py", line 10, in test_2
    raise SystemExit(0)
SystemExit: 0
Ran 2 tests in 0.001s
FAILED (failures=1)
$ echo $?
0

Only two tests run; there is a traceback, but the exit status is still 0!
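Until this is fixed, one way to keep a SystemExit from escaping a test is to assert it explicitly. `run_cli` below is a hypothetical stand-in for the argparse-driven entry point described above, not code from testtools:

```python
import unittest


def run_cli():
    # Hypothetical entry point standing in for the argparse-based
    # command that exits on an unknown subcommand.
    raise SystemExit(2)


class TestCLI(unittest.TestCase):
    def test_unknown_subcommand_exits(self):
        # SystemExit derives from BaseException, not Exception, which is
        # why a generic "catch everything" runner can miss it; trapping
        # it explicitly with assertRaises keeps it inside the test.
        with self.assertRaises(SystemExit) as caught:
            run_cli()
        self.assertEqual(caught.exception.code, 2)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestCLI)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

With the SystemExit contained, subsequent tests run normally and the exit status reflects real failures.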

assertEqual() has poor output for sets

testtools' assertEqual(set1, set2) will display both sets in full. However, unittest's assertSetEqual() displays a nice diff of only the differing elements. testtools' assertEqual() should either delegate to unittest's assertSetEqual() when both arguments are sets, or print its own nice diff.

odd setuptools error

While running 'python setup.py develop', I get a number of tracebacks, a CPU pegged at 100%, and a MemoryError - fun!

Here's how to reproduce:

  1. Create a virtualenv for python 3.4.3 (I haven't tested other versions - they may work as well).

    virtualenv -p python3 ve3
    . ve3/bin/activate
    
  2. Install testtools in 'developer mode':

    python setup.py develop
    

When I run this, I get the following output:

running develop
running egg_info
writing requirements to testtools.egg-info/requires.txt
writing top-level names to testtools.egg-info/top_level.txt
writing dependency_links to testtools.egg-info/dependency_links.txt
writing testtools.egg-info/PKG-INFO
writing pbr to testtools.egg-info/pbr.json
[pbr] Processing SOURCES.txt
[pbr] In git context, generating filelist from git
warning: no files found matching 'AUTHORS'
warning: no files found matching 'ChangeLog'
warning: no previously-included files matching '*.pyc' found anywhere in distribution
reading manifest template 'MANIFEST.in'
no previously-included directories found matching 'doc/_build'
writing manifest file 'testtools.egg-info/SOURCES.txt'
running build_ext
Creating /home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/testtools.egg-link (link to .)
testtools 0.9.24.dev253 is already the active version in easy-install.pth

Installed /home/thomi/code/canonical/testtools/testtools
Processing dependencies for testtools==0.9.24.dev253
Searching for traceback2
Reading https://pypi.python.org/simple/traceback2/
Best match: traceback2 1.4.0
Downloading https://pypi.python.org/packages/source/t/traceback2/traceback2-1.4.0.tar.gz#md5=9e9723f4d70bfc6308fa992dd193c400
Processing traceback2-1.4.0.tar.gz
Writing /tmp/easy_install-48ra02iu/traceback2-1.4.0/setup.cfg
Running traceback2-1.4.0/setup.py -q bdist_egg --dist-dir /tmp/easy_install-48ra02iu/traceback2-1.4.0/egg-dist-tmp-9iq7dwv3
ERROR:root:Error parsing
Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/.eggs/pbr-1.8.0-py3.4.egg/pbr/core.py", line 109, in pbr
    attrs = util.cfg_to_args(path)
  File "/home/thomi/code/canonical/testtools/testtools/.eggs/pbr-1.8.0-py3.4.egg/pbr/util.py", line 245, in cfg_to_args
    kwargs = setup_cfg_to_setup_kwargs(config)
  File "/home/thomi/code/canonical/testtools/testtools/.eggs/pbr-1.8.0-py3.4.egg/pbr/util.py", line 379, in setup_cfg_to_setup_kwargs
    cmd = cls(dist)
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/__init__.py", line 124, in __init__
    _Command.__init__(self,dist)
  File "/usr/lib/python3.4/distutils/cmd.py", line 57, in __init__
    raise TypeError("dist must be a Distribution instance")
TypeError: dist must be a Distribution instance
Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/.eggs/pbr-1.8.0-py3.4.egg/pbr/core.py", line 109, in pbr
    attrs = util.cfg_to_args(path)
  File "/home/thomi/code/canonical/testtools/testtools/.eggs/pbr-1.8.0-py3.4.egg/pbr/util.py", line 245, in cfg_to_args
    kwargs = setup_cfg_to_setup_kwargs(config)
  File "/home/thomi/code/canonical/testtools/testtools/.eggs/pbr-1.8.0-py3.4.egg/pbr/util.py", line 379, in setup_cfg_to_setup_kwargs
    cmd = cls(dist)
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/__init__.py", line 124, in __init__
    _Command.__init__(self,dist)
  File "/usr/lib/python3.4/distutils/cmd.py", line 57, in __init__
    raise TypeError("dist must be a Distribution instance")
TypeError: dist must be a Distribution instance

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 152, in save_modules
    yield saved
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 193, in setup_context
    yield
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 237, in run_setup
    DirectorySandbox(setup_dir).run(runner)
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 267, in run
    return func()
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 236, in runner
    _execfile(setup_script, ns)
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 46, in _execfile
    exec(code, globals, locals)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 108, in dump
    return pickle.dumps(type), pickle.dumps(exc)
_pickle.PicklingError: Can't pickle <class 'setuptools.sandbox.UnpickleableException'>: it's not the same object as setuptools.sandbox.UnpickleableException

During handling of the above exception, another exception occurred:

[the PicklingError traceback above repeats several more times; trimmed for brevity]

Traceback (most recent call last):
  File "setup.py", line 16, in <module>
    pbr=True)
  File "/usr/lib/python3.4/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.4/distutils/dist.py", line 955, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.4/distutils/dist.py", line 974, in run_command
    cmd_obj.run()
  File "/home/thomi/code/canonical/testtools/testtools/ve3/lib/python3.4/site-packages/setuptools/sandbox.py", line 110, in dump
    return cls.dump(cls, cls(repr(exc)))
  [the frame above repeats dozens of times as dump() recurses; trimmed for brevity]
MemoryError

The command takes about a minute to complete, and during that time python pegs a CPU core to 100%, and memory usage goes through the roof :(

If python setup.py develop isn't supported, we should make it exit immediately with an error. I can't see why we wouldn't support it though, so this is likely 'just a bug'.

Missing tags (1.8.0 not available)

Hi,

I'd like to link to the v1.8.0 NEWS file on Github, but discovered that that tag is not pushed yet. Would appreciate that - thanks!

AnyMatch isn't exported

The following line throws an ImportError because testtools/matchers/__init__.py doesn't list AnyMatch in __all__:

from testtools.matchers import AnyMatch

The following is required instead:

from testtools.matchers._higherorder import AnyMatch

That's inconsistent with all the other matchers. This was introduced in revision b301640, where AnyMatch was first defined.

Example in docstring for `failed` is wrong

The example suggests that the matcher you pass in is matched against the exception value:

        error = RuntimeError('foo')
        fails_at_runtime = failed(Equals(error))
        deferred = defer.fail(error)
        assert_that(deferred, fails_at_runtime)

However, it is actually matched against the failure. You would need to do something like this instead:

        error = RuntimeError('foo')
        fails_at_runtime = failed(AfterPreprocessing(lambda f: f.value, Equals(error)))
        deferred = defer.fail(error)
        assert_that(deferred, fails_at_runtime)

This is somewhat of an uncomfortable idiom; maybe we need an extra helper here to ease things along?

`raises` matcher Mismatch text could be more helpful

I often write test code like so:

self.assertThat(
    lambda: client.add_key_for_owner(user_two, fingerprint),
    raises(GPGServiceException("Fingerprint already in the database.")))

However, if there's a bug in my code and the function in question raises a different exception, I get the following output:

MismatchError: <type 'exceptions.NameError'> is not a <class 'lp.services.gpg.interfaces.GPGServiceException'>

At this point I usually change the test to run the function outside of an assertion so I get a proper traceback that allows me to debug the problem. It would be nice if the raises matcher gave me more verbose information about what was raised - at a minimum str(exception).

Use six instead of compat

In #159 I suggested we drop Python 2.6. We agreed that was a bit risky for now. However, using six instead of testtools.compat is a great idea.

We should:

  • Add six as a dependency
  • Update code to use six directly instead of compat
  • Deprecate compat
  • Release
  • Delete compat

testtools.matchers.MatchesDict bug?

Hi,
I'm trying to use the MatchesDict matcher and I think it's broken (?).
To my understanding this matcher has the same functionality as unittest's assertDictEqual.

Here's test snippet:

import testtools
from testtools.matchers import MatchesDict

class Test(testtools.TestCase):
    def test(self):
        self.assertThat({1:2}, MatchesDict({3:4}))
    def test2(self):
        self.assertThat({3:4}, MatchesDict({3:4}))

and that's what I get:

(testtools)tmp -> nosetests
FE
======================================================================
ERROR: test.Test.test2
----------------------------------------------------------------------
_StringException: Traceback (most recent call last):
  File "/private/tmp/test.py", line 8, in test2
    self.assertThat({3:4}, MatchesDict({3:4}))
  File "/Users/michael/.virtualenvs/testtools/lib/python2.7/site-packages/testtools/testcase.py", line 431, in assertThat
    mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
  File "/Users/michael/.virtualenvs/testtools/lib/python2.7/site-packages/testtools/testcase.py", line 481, in _matchHelper
    mismatch = matcher.match(matchee)
  File "/Users/michael/.virtualenvs/testtools/lib/python2.7/site-packages/testtools/matchers/_dict.py", line 165, in match
    return MatchesAllDict(matchers).match(observed)
  File "/Users/michael/.virtualenvs/testtools/lib/python2.7/site-packages/testtools/matchers/_dict.py", line 44, in match
    mismatches[label] = self.matchers[label].match(observed)
  File "/Users/michael/.virtualenvs/testtools/lib/python2.7/site-packages/testtools/matchers/_dict.py", line 102, in match
    mismatches = self._compare_dicts(self._matchers, observed)
  File "/Users/michael/.virtualenvs/testtools/lib/python2.7/site-packages/testtools/matchers/_dict.py", line 96, in _compare_dicts
    mismatch = expected[key].match(observed[key])
AttributeError: 'int' object has no attribute 'match'


======================================================================
FAIL: test.Test.test
----------------------------------------------------------------------
_StringException: Traceback (most recent call last):
  File "/private/tmp/test.py", line 6, in test
    self.assertThat({1:2}, MatchesDict({3:4}))
  File "/Users/michael/.virtualenvs/testtools/lib/python2.7/site-packages/testtools/testcase.py", line 433, in assertThat
    raise mismatch_error
MismatchError: Extra: {
  1: 2,
}
Missing: {
  3: 4,
}


----------------------------------------------------------------------
Ran 2 tests in 0.026s

FAILED (errors=1, failures=1)

Am I missing something regarding the usage of MatchesDict & matchers?

missing a way to use assertRaises as context manager

I think it would be useful if we could use assertRaises as a context manager, the same way as in unittest.
That would make it possible to organize code better (as a block) and to store the raised exception in a variable for further inspection.
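The unittest idiom being requested looks like this (standard library only):

```python
import unittest


class ContextManagerExample(unittest.TestCase):
    def test_raises_as_context_manager(self):
        # assertRaises as a context manager keeps the exception around
        # for further inspection after the block exits.
        with self.assertRaises(ValueError) as ctx:
            int("not a number")
        self.assertIn("invalid literal", str(ctx.exception))


result = unittest.TestResult()
ContextManagerExample("test_raises_as_context_manager").run(result)
assert result.wasSuccessful()
```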

self.skip() stopped working with 1.9.0

We had a test that called self.skip('something'), which with 1.8.1 stopped the test run and reported it as skipped just fine. With 1.9.0 this now raises an exception and reports as a failure.

Reversed order in testtools messages

When using an assert such as self.assertEquals(tester(), expected), the error message labels the arguments in the wrong order (that is, not the order used in the documentation's examples). Apparently this is confusing enough that OpenStack opened a family of bugs and filed patches to swap the order in the tests, see:
https://bugs.launchpad.net/ceilometer/+bug/1277104

If the code is:

import testtools

class TestConfigTrueValue(testtools.TestCase):
    def test_testEquals(self):
        reference = "reference-0123456789012345678901234567890123456789"
        def function_under_test():
            return "actual-0123456789012345678901234567890123456789"
        self.assertEquals(function_under_test(), reference)

Then running it yields:

[zaitcev@guren xxx]$ python3 -c 'import nose; nose.main()'
F
======================================================================
FAIL: testic.TestConfigTrueValue.test_testEquals
----------------------------------------------------------------------
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/q/zaitcev/tmp/xxx/testic.py", line 10, in test_testEquals
    self.assertEquals(function_under_test(), reference)
  File "/usr/lib/python3.3/site-packages/testtools/testcase.py", line 322, in assertEqual
    self.assertThat(observed, matcher, message)
  File "/usr/lib/python3.3/site-packages/testtools/testcase.py", line 417, in assertThat
    raise MismatchError(matchee, matcher, mismatch, verbose)
testtools.matchers._impl.MismatchError: !=:
reference = 'actual-0123456789012345678901234567890123456789'
actual    = 'reference-0123456789012345678901234567890123456789'

UnicodeDecodeError content.py text_content()

I've just switched my language to Esperanto, and started getting a testtools exception when a test failed. The exception was hiding the error, so I had to debug to understand what was going on.

This is the exception:
Traceback (most recent call last):
  File "/usr/bin/autopilot", line 9, in <module>
    load_entry_point('autopilot==1.4.0', 'console_scripts', 'autopilot')()
  File "/usr/lib/python2.7/dist-packages/autopilot/run.py", line 414, in main
    test_app.run()
  File "/usr/lib/python2.7/dist-packages/autopilot/run.py", line 254, in run
    self.run_tests()
  File "/usr/lib/python2.7/dist-packages/autopilot/run.py", line 367, in run_tests
    test_result = test_suite.run(result)
  File "/usr/lib/python2.7/unittest/suite.py", line 108, in run
    test(result)
  File "/usr/lib/python2.7/unittest/case.py", line 395, in __call__
    return self.run(*args, **kwds)
  File "/usr/lib/python2.7/dist-packages/testscenarios/testcase.py", line 65, in run
    return super(WithScenarios, self).run(result)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 568, in run
    return self.RunTest(self, self.exception_handlers).run(result)
  File "/usr/lib/python2.7/dist-packages/testtools/runtest.py", line 74, in run
    return self._run_one(actual_result)
  File "/usr/lib/python2.7/dist-packages/testtools/runtest.py", line 88, in _run_one
    return self._run_prepared_result(ExtendedToOriginalDecorator(result))
  File "/usr/lib/python2.7/dist-packages/testtools/runtest.py", line 107, in _run_prepared_result
    handler(self.case, self.result, e)
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 533, in _report_error
    result.addError(self, details=self.getDetails())
  File "/usr/lib/python2.7/dist-packages/testtools/testresult/real.py", line 1120, in addError
    return self.decorated.addError(test, details=details)
  File "/usr/lib/python2.7/dist-packages/autopilot/testresult.py", line 61, in addError
    self._log_details(logging.ERROR, test.getDetails())
  File "/usr/lib/python2.7/dist-packages/testtools/testcase.py", line 234, in getDetails
    return self._details
  File "/usr/lib/python2.7/bdb.py", line 53, in trace_dispatch
    return self.dispatch_return(frame, arg)
  File "/usr/lib/python2.7/bdb.py", line 88, in dispatch_return
    self.user_return(frame, arg)
  File "/usr/lib/python2.7/pdb.py", line 190, in user_return
    self.interaction(frame, None)
  File "/usr/lib/python2.7/pdb.py", line 209, in interaction
    self.print_stack_entry(self.stack[self.curindex])
  File "/usr/lib/python2.7/pdb.py", line 900, in print_stack_entry
    prompt_prefix)
  File "/usr/lib/python2.7/bdb.py", line 381, in format_stack_entry
    s = s + repr.repr(rv)
  File "/usr/lib/python2.7/repr.py", line 24, in repr
    return self.repr1(x, self.maxlevel)
  File "/usr/lib/python2.7/repr.py", line 32, in repr1
    return getattr(self, 'repr_' + typename)(x, level)
  File "/usr/lib/python2.7/repr.py", line 85, in repr_dict
    valrepr = repr1(x[key], newlevel)
  File "/usr/lib/python2.7/repr.py", line 34, in repr1
    s = __builtin__.repr(x)
  File "/usr/lib/python2.7/dist-packages/testtools/content.py", line 124, in __repr__
    self.content_type, _join_b(self.iter_bytes()))
  File "/usr/lib/python2.7/dist-packages/testtools/content.py", line 97, in iter_bytes
    return self._get_bytes()
  File "/usr/lib/python2.7/dist-packages/testtools/content.py", line 266, in <lambda>
    return Content(UTF8_TEXT, lambda: [text.encode('utf8')])
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 135: ordinal not in range(128)

This is the error message the test is trying to add as detail:

/usr/lib/python2.7/dist-packages/testtools/content.py(267)text_content()
-> return Content(UTF8_TEXT, lambda: [text.encode('utf8')])
(Pdb) text
'Testability driver loaded. Wire protocol version is "1.4".\nQInotifyFileSystemWatcherEngine::addPaths: inotify_add_watch failed: Ne havi\xc4\x9das plu da spaco sur aparato\nfile:///home/elopio/workspace/ubuntu/ubuntu-clock-app/reviews/nik90/transition-worldclock-u1db/ubuntu-clock-app.qml:24 module "U1db" is not installed\n\n'

Drop Python 2.6 support

I propose that:

  • the next release be the last release that supports Python 2.6
  • we replace testtools.compat with a dependency on six

Tries to pull non-existent deps on python3

The setup.py now declares some setup and install dependencies, including traceback2 and unittest2, which are backports of Python 3 modules to Python 2. They are pulled in unconditionally, which causes python3 setup.py install to fail.
