pytest-testmon's Issues

dogfooding

Maybe it's trivial, maybe not.
Let's set up a development environment for hacking on the
testmon plug-in where you use a stable version as the test
runner, but the code under test is a bleeding-edge/edited version.

Or in other words: 'import pytest_myplugin' has to pick up the stable
source code on pytest start-up, but all the tests (and their
subprocesses) doing 'import pytest_myplugin' have to pick up the
bleeding-edge version.

proper file change watching

tmon.py is a primitive script put together for demo purposes. I would be very glad to put the re-running out of scope of testmon.
to consider:

In the short term, testmon will probably be failing due to its own bugs, so I will not want to run it automatically that much. Also, in the long term I quite struggle with console test outputs, so I would probably run py.test manually or use PyCharm to run the test suite.

Go back to using pytest-cache, or at least manually store .testmondata in .cache/?

This is only a very minor annoyance, but would it be possible to put .testmondata in .cache/, either via the pytest-cache API (which is part of pytest-core since 2.8), or perhaps even by manually writing it there?

The idea is to avoid having to add .testmondata to .gitignore, instead sticking to .cache (it sounds a bit awkward to have to add each plugin's individual cache file to .gitignore; in fact, not all devs of a project may be using the same plugin set).

pytest-xdist support

When combining testmon and xdist, more tests are rerun than necessary. I suspect the xdist runners don't update .testmondata.

Is testmon compatible with xdist or is this due to misconfiguration?

Interrupting tests will skip whole file on next run

When interrupting pytest-testmon, it seems to skip the whole file on the next run!

% pytest --testmon -p no:sugar
platform linux -- Python 3.5.2, pytest-3.0.5, py-1.4.32, pluggy-0.4.0
testmon=True, changed files: 0, skipping collection of 0 items, run variant: b2d229d2f7cfaa2ac4f3bc832d31950a
plugins: catchlog-1.2.2, asyncio-0.5.0, repeat-0.4.1, mock-1.5.0, cov-2.4.0, django-3.1.2, testmon-0.9.3, pylama-7.3.3, profiling-1.2.2
collected 973 items 

test_a.py ..^C

!!!! KeyboardInterrupt !!!!
to show a full traceback on KeyboardInterrupt use --fulltrace
…/src/django/django/db/models/sql/query.py:1086: KeyboardInterrupt
==== 2 passed in 6.57 seconds =====

% pytest --testmon -p no:sugar
platform linux -- Python 3.5.2, pytest-3.0.5, py-1.4.32, pluggy-0.4.0
testmon=True, changed files: 0, skipping collection of 3 items, run variant: b2d229d2f7cfaa2ac4f3bc832d31950a
collected 872 items 

test_b.py ....

The "skipping collection of 3 items" looks suspicious.
Are these the two successful tests + the file/module?!

compatibility with pytest 3.0.2

Hi,
I am new to Python and I am interested in using testmon in my CircleCI builds.
The problem is that I need pytest 3.0.2 to run my tests, and installing testmon automatically uninstalls my pytest version and installs pytest 2.9.2, which causes my CircleCI builds to fail.
Any idea what I can do?
Thanks in advance

set-up travis-ci

Travis CI is allegedly the most mature.
I'm open to other providers (like drone.io), especially if they sponsor customization of testmon for their (or any CI's) needs :).
The initial support should be for

  • python 2.7
  • python 3.newest
  • coverage<4.0
  • pytest<2.7
  • pytest>=2.7

Cannot install testmon without pandoc installed, version from apt doesn't work

You have setup_requires for setuptools-markdown, which requires pandoc to be installed on any machine setup.py runs on. This includes any machine that tries to install the package from pip.

As a result when I tried to install testmon from pip, first it failed because I didn't have pandoc installed, then it failed because something about the default pandoc install from apt doesn't work with this and complains about a missing format - I don't know whether it's because markdown isn't supported, rst isn't supported, or what. I don't really feel like debugging pandoc right now especially when I'm pretty sure it's going to require me to update my Haskell toolchain to get it working.

All of this seems very unnecessary and likely to be causing other people problems too. I admit RST is a bit of an annoying format, but avoiding it doesn't seem worth this hassle. Couldn't you either precompile your README to RST as part of the package build, or just write it in RST in the first place (GitHub supports READMEs in RST natively, so that won't cause you a problem)?

Changing conftest reruns all tests

Even with a test file that does not use any fixtures (besides autouse ones), changes to conftest.py (e.g. adding a function which is not a fixture) seem to trigger a rerun of all tests.

pytest --fixtures-per-test shows that there are some autouse ones, but those are not changed:

…
django_db_blocker
…
django_test_environment
…

test_mon.py:

def test_testmon():
    assert 1

Is this expected?
I've found ccc4088 (#17), which seems to be related.
I've not tried it in a minimal project.

Can't install - unresolved packages

I can see that there is a requirements.txt, but it's not used in setup.py. Instead, setup.py declares a different set of dependencies. Unfortunately I can't install the package because of coverage_pth, while installing coveragepy from requirements.txt works just fine.

Can you fix it, or help me figure out what I am doing wrong? By the way, this is on Debian Wheezy with Python 2.7.

$ pip install pytest-testmon
Downloading/unpacking pytest-testmon
  Downloading pytest-testmon-0.6.tar.gz
  Running setup.py egg_info for package pytest-testmon

Downloading/unpacking pytest>=2.7.0,<2.9 (from pytest-testmon)
  Running setup.py egg_info for package pytest

Requirement already satisfied (use --upgrade to upgrade): coverage>=3.7.1,<4.0 in ./src/coverage (from pytest-testmon)
Downloading/unpacking coverage-pth (from pytest-testmon)
  Running setup.py egg_info for package coverage-pth
    Traceback (most recent call last):
      File "<string>", line 14, in <module>
      File "/home/michal.horejsek/build/coverage-pth/setup.py", line 9, in <module>
        rel_site_packages = sprem.group(1)
    AttributeError: 'NoneType' object has no attribute 'group'
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):

  File "<string>", line 14, in <module>

  File "/home/michal.horejsek/build/coverage-pth/setup.py", line 9, in <module>

    rel_site_packages = sprem.group(1)

AttributeError: 'NoneType' object has no attribute 'group'

----------------------------------------
Command python setup.py egg_info failed with error code 1 in /home/michal.horejsek/build/coverage-pth
Storing complete log in /home/michal.horejsek/.pip/pip.log
$ pip install coverage_pth
Downloading/unpacking coverage-pth
  Running setup.py egg_info for package coverage-pth
    Traceback (most recent call last):
      File "<string>", line 14, in <module>
      File "/home/michal.horejsek/build/coverage-pth/setup.py", line 9, in <module>
        rel_site_packages = sprem.group(1)
    AttributeError: 'NoneType' object has no attribute 'group'
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):

  File "<string>", line 14, in <module>

  File "/home/michal.horejsek/build/coverage-pth/setup.py", line 9, in <module>

    rel_site_packages = sprem.group(1)

AttributeError: 'NoneType' object has no attribute 'group'

----------------------------------------
Command python setup.py egg_info failed with error code 1 in /home/michal.horejsek/build/coverage-pth
Storing complete log in /home/michal.horejsek/.pip/pip.log

Using --testmon skips --pep8 checks

I think that --testmon should be aware of tests that failed because of e.g. --pep8 (via https://pypi.python.org/pypi/pytest-pep8).

When run without --testmon it results in a failure like this:

==== FAILURES ====
____ PEP8-check ____
…/foo.py:86:80: E501 line too long (86 > 79 characters)
    Example line that is too long ....

But when using --testmon this file/test is not being tested again.

I'm not sure if this would be possible already, but in general testmon should
reconsider any file where a test failed to be included in the next run.

/cc @RonnyPfannschmidt

Tests are failing on Fedora

I'm trying to run the test suite on Fedora 23 and I get the following failures on Python 3 and 2. Any idea what's going on?

=================================================== FAILURES ====================================================
______________________________ TestCoverageSubprocess.test_coverage_expected_fail _______________________________
Traceback (most recent call last):
  File ".../test/test_subprocess.py", line 29, in test_coverage_expected_fail
    assert "Couldn't read 'nonexistent_file' as a config file" in output
AssertionError: assert "Couldn't read 'nonexistent_file' as a config file" in ''
--------------------------------------------- Captured stdout call ----------------------------------------------

____________________________________ TestCoverageSubprocess.test_subprocess _____________________________________
Traceback (most recent call last):
  File ".../test/test_subprocess.py", line 43, in test_subprocess
    assert subprocess_coverage_exists, "Dir: {}".format(os.listdir(os.getcwd()))
AssertionError: Dir: ['.testmoncoveragerc', 'subprocesstest.py']
assert False
--------------------------------------------- Captured stdout call ----------------------------------------------

_______________________________________________ test_subprocesss ________________________________________________
Traceback (most recent call last):
  File ".../test/test_testmon.py", line 88, in test_subprocesss
    deps = track_it(testdir, func)
  File ".../test/test_testmon.py", line 75, in track_it
    testmon.stop_and_save(testmon_data, testdir.tmpdir.strpath, 'testnode')
  File ".../testmon/testmon_core.py", line 107, in stop_and_save
    testmon_data.set_dependencies(nodeid, self.cov.get_data(), rootdir)
  File ".../testmon/testmon_core.py", line 245, in set_dependencies
    result[filename] = checksum_coverage(self.parse_cache(filename).blocks,[1])
  File ".../testmon/testmon_core.py", line 250, in parse_cache
    self.modules_cache[module] = Module(file_name=module)
  File ".../testmon/process_code.py", line 46, in __init__
    with open(file_name) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pytest-username/pytest-4/testdir/test_subprocesss0/testnode'
--------------------------------------------- Captured stdout call ----------------------------------------------
running ['/usr/bin/python3', '/usr/lib/python3.4/site-packages/pytest.py', '--basetemp=/tmp/pytest-username/pytest-4/testdir/test_subprocesss0/runpytest-0', 'test_a.py'] curdir= /tmp/pytest-username/pytest-4/testdir/test_subprocesss0

=================================================== FAILURES ====================================================
______________________________ TestCoverageSubprocess.test_coverage_expected_fail _______________________________
Traceback (most recent call last):
  File ".../test/test_subprocess.py", line 29, in test_coverage_expected_fail
    assert "Couldn't read 'nonexistent_file' as a config file" in output
AssertionError: assert "Couldn't read 'nonexistent_file' as a config file" in ''
--------------------------------------------- Captured stdout call ----------------------------------------------

____________________________________ TestCoverageSubprocess.test_subprocess _____________________________________
Traceback (most recent call last):
  File ".../test/test_subprocess.py", line 43, in test_subprocess
    assert subprocess_coverage_exists, "Dir: {}".format(os.listdir(os.getcwd()))
AssertionError: Dir: ['.testmoncoveragerc', 'subprocesstest.py']
assert False
--------------------------------------------- Captured stdout call ----------------------------------------------

_______________________________________________ test_subprocesss ________________________________________________
Traceback (most recent call last):
  File ".../test/test_testmon.py", line 88, in test_subprocesss
    deps = track_it(testdir, func)
  File ".../test/test_testmon.py", line 75, in track_it
    testmon.stop_and_save(testmon_data, testdir.tmpdir.strpath, 'testnode')
  File ".../testmon/testmon_core.py", line 107, in stop_and_save
    testmon_data.set_dependencies(nodeid, self.cov.get_data(), rootdir)
  File ".../testmon/testmon_core.py", line 245, in set_dependencies
    result[filename] = checksum_coverage(self.parse_cache(filename).blocks,[1])
  File ".../testmon/testmon_core.py", line 250, in parse_cache
    self.modules_cache[module] = Module(file_name=module)
  File ".../testmon/process_code.py", line 46, in __init__
    with open(file_name) as f:
IOError: [Errno 2] No such file or directory: '/tmp/pytest-username/pytest-5/testdir/test_subprocesss0/testnode'
--------------------------------------------- Captured stdout call ----------------------------------------------
running ['/usr/bin/python2', '/usr/lib/python2.7/site-packages/pytest.py', '--basetemp=/tmp/pytest-username/pytest-5/testdir/test_subprocesss0/runpytest-0', 'test_a.py'] curdir= /tmp/pytest-username/pytest-5/testdir/test_subprocesss0

Thank You.

Specifying that some code depends on a data file

Use case I'm struggling with: .sql files with SQL views that don't trigger test reruns.

Is it possible to specify that some function/class should be considered changed when a specific file changes? I'd like to invalidate everything that uses a specific ORM class that maps to a SQL view when I'm changing that view.

tf.setmtime(1424880936) test suite refactor

testmon has a basic performance optimization/caching where it stores modified times of source files and on the next run it compares current modified times (on the filesystem) with the stored ones. If they are the same, testmon assumes the files didn't change and avoids doing any more work (like parsing the source code, let alone executing the tests). The precision of file system modified time is 1 second.

On the other hand, the testmon test suite modifies source files and executes simulated tests many times a second. This would cause the performance optimization to kick in and prevent the intended test cases from executing properly.

As mitigation I usually write a random modified time to any created or modified file, like here:
https://github.com/tarpas/pytest-testmon/blob/master/test/test_testmon.py#L189

However, this is ugly and hard to understand for newcomers. There must be an easier, or at least more readable, solution.

deleting code causes an internal exception

I see this with pytest-testmon > 0.8.3 (downgrading to this version fixes it for me).

How to reproduce:

  • delete .testmondata
  • run py.test --testmon once
  • Now delete some code (in my case, it was decorated with # pragma: no cover, not sure if that's relevant)
  • call py.test --testmon again
    And look at a backtrace like the following
===================================================================================== test session starts =====================================================================================
platform linux2 -- Python 2.7.12, pytest-3.0.7, py-1.4.33, pluggy-0.4.0
Django settings: sis.settings_devel (from ini file)
testmon=True, changed files: sis/lib/python/sis/rest.py, skipping collection of 529 items, run variant: default
rootdir: /home/delgado/nobackup/git/sis/software, inifile: pytest.ini
plugins: testmon-0.9.4, repeat-0.4.1, env-0.6.0, django-3.1.2, cov-2.4.0
collected 122 items 
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 98, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 132, in _main
INTERNALERROR>     config.hook.pytest_collection(session=session)
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 141, in pytest_collection
INTERNALERROR>     return session.perform_collect()
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/main.py", line 602, in perform_collect
INTERNALERROR>     config=self.config, items=items)
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/home/delgado/nobackup/virtualenvs/sis/lib/python2.7/site-packages/testmon/pytest_testmon.py", line 165, in pytest_collection_modifyitems
INTERNALERROR>     assert item.nodeid not in self.collection_ignored
INTERNALERROR> AssertionError: assert 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' not in set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...])
INTERNALERROR>  +  where 'sis/lib/python/sis/modules/marvin/tests.py::TestGetEventIDsForAWID::test_should_fail_for_expired_events' = <TestCaseFunction 'test_should_fail_for_expired_events'>.nodeid
INTERNALERROR>  +  and   set(['da-exchanged/lib/python/exchanged/tests/test_config.py::TestConfigOptions::test_should_have_default_from_addr', ...'da-exchanged/lib/python/exchanged/tests/test_config.py::TestGetExchanges::test_should_return_sensible_defaults', ...]) = <testmon.pytest_testmon.TestmonDeselect object at 0x7f85679b82d0>.collection_ignored

==================================================================================== 185 tests deselected =====================================================================================
=============================================================================== 185 deselected in 0.34 seconds ================================================================================

implement storing of the failures

  1. run py.test --testmon
  2. get one or more failures
  3. re-run py.test --testmon
  4. get everything deselected - no clear way to get the failures again without re-running all the tests
    expected:
    some way to either retrieve the failure reports (including stacktraces and captured output), or at least re-run all the failing tests

subprocesses support

coverage.py supports measuring subprocesses so it might be fairly easy for us to do the same.
http://nedbatchelder.com/code/coverage/subprocess.html

We'll create a special .testmoncoveragerc file in the py.test rootdir and set its absolute path in os.environ['COVERAGE_PROCESS_START'].
The .testmoncoveragerc will have
data_file = .testmon_single_test_coverage
and we'll check for its existence after each test execution ....
... then have coverage read and combine it.

Issue with pytest-sugar

Hi,
When I try to use pytest-testmon with pytest-sugar it throws me this error:

pytest_sugar.py:71: in pytest_deselected
    pluginmanager = items[0].config.pluginmanager
AttributeError: 'str' object has no attribute 'config'

I know that this issue happens in the pytest-sugar plugin, but maybe you are able to figure out where the problem is.

Might deselect too much after interrupt

I've just seen testmon deselect all items after I've cancelled the test run using KeyboardInterrupt.

I am using pytest-sugar also, and it was at 9% at that time.

Test session starts (platform: linux, Python 3.5.2, pytest 3.0.3, pytest-sugar 0.7.1)
cachedir: .cache
Django settings: velodrome.settings (from environment variable)
testmon=True, changed files: 17, skipping collection of 0 items, run variant: 60332d2767d943a82bceab2bffc7c6ea
…
^C

The next run deselected all items then:

      Test session starts (platform: linux, Python 3.5.2, pytest 3.0.3, pytest-sugar 0.7.1)
cachedir: .cache
Django settings: velodrome.settings (from environment variable)
testmon=True, changed files: 0, skipping collection of X items, run variant: 60332d2767d943a82bceab2bffc7c6ea
rootdir: …
plugins: xdist-1.15.dev9+ngba35a3d, catchlog-1.2.2, notifier-0.3, incremental-0.4.2, instafail-0.3.0, sugar-0.7.1, testmon-0.8.3, django-3.0.1.dev3+ngf18a858, random-0.2, asyncio-0.5.0, repeat-0.4.1, cov-2.4.0, mock-1.2, profiling-1.1.1, pdb-0.1.0, pylama-7.1.0, bpdb-0.1.4

==================================================== X tests deselected =====================================================

Results (0.14s):
     X deselected

I fear that this might be caused by f53d879 (or something else from the PR (#47)).

Feature Request: add option to enable saving testmondata on KeyboardInterrupt

Sometimes when building the initial .testmondata for a large project, I may need to stop the process, but would like all data for the tests run so far to be saved, so that they at least don't need to be run again to rebuild .testmondata.

If possible, it'd be nice to have a command-line switch to enable this behavior, so that when I have to Ctrl-C the test run and restart it later, I won't be starting over from scratch, since building the initial .testmondata can be very slow.

Stored results not adjusted for different options (verbose, tbstyle)

Even though I am passing -vv, I get the message "Use -v to get the full diff" with the AssertionError on failed tests:

___ test_foo ___
project/app/tests/test_foo.py:175: in test_foo
    assert new == [
E   assert [{'asset': ..._name': None}] == [{'asset': '..._name': None}]
E     At index 2 diff: {'asset_name': None, 'asset': None} != {'asset_name': None, 'asset': 'INVALID-UUID'}
E     Use -v to get the full diff
!!! 2 errors during collection !!!

Handle BdbQuit: should not deselect tests that raised it

Tests where pdb.set_trace() was used and then quit seem to get deselected,
and testmon only displays the traceback from the BdbQuit without re-running
them in the future:

project/app/tests/test_foo.py:2337: in test_foo
    something.assert_count(url, 1)
..…/Vcs/django-rest-framework/rest_framework/test.py:282: in get
    response = super(APIClient, self).get(path, data=data, **extra)
..…/Vcs/django-rest-framework/rest_framework/test.py:208: in get
    return self.generic('GET', path, **r)
..…/Vcs/django/django/test/client.py:411: in generic
    return self.request(**r)
..…/Vcs/django-rest-framework/rest_framework/test.py:279: in request
    return super(APIClient, self).request(**kwargs)
..…/Vcs/django-rest-framework/rest_framework/test.py:231: in request
    request = super(APIRequestFactory, self).request(**kwargs)
..…/Vcs/django/django/test/client.py:478: in request
    response = self.handler(environ)
..…/Vcs/django/django/core/handlers/exception.py:42: in inner
    response = get_response(request)
..…/Vcs/django/django/core/handlers/base.py:185: in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
..…/Vcs/django/django/core/handlers/base.py:185: in _get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
..…/pyenv/3.5.2/lib/python3.5/contextlib.py:30: in inner
    return func(*args, **kwds)
..…/pyenv/3.5.2/lib/python3.5/contextlib.py:30: in inner
    return func(*args, **kwds)
..…/Vcs/django/django/views/decorators/csrf.py:58: in wrapped_view
    return view_func(*args, **kwargs)
..…/Vcs/django-rest-framework/rest_framework/viewsets.py:83: in view
    return self.dispatch(request, *args, **kwargs)
..…/Vcs/django-rest-framework/rest_framework/views.py:480: in dispatch
    response = handler(request, *args, **kwargs)
..…/Vcs/django-rest-framework/rest_framework/views.py:443: in handle_exception
    self.raise_uncaught_exception(exc)
..…/Vcs/django-rest-framework/rest_framework/views.py:480: in dispatch
    response = handler(request, *args, **kwargs)
..…/Vcs/django-rest-framework/rest_framework/mixins.py:45: in list
    return self.get_paginated_response(serializer.data)
..…/Vcs/django-rest-framework/rest_framework/serializers.py:735: in data
    ret = super(ListSerializer, self).data
..…/Vcs/django-rest-framework/rest_framework/serializers.py:262: in data
    self._data = self.to_representation(self.instance)
..…/Vcs/django-rest-framework/rest_framework/serializers.py:653: in to_representation
    self.child.to_representation(item) for item in iterable
..…/Vcs/django-rest-framework/rest_framework/serializers.py:653: in <listcomp>
    self.child.to_representation(item) for item in iterable
..…/Vcs/django-rest-framework/rest_framework/serializers.py:493: in to_representation
    attribute = field.get_attribute(instance)
..…/Vcs/django-rest-framework/rest_framework/fields.py:446: in get_attribute
    return get_attribute(instance, self.source_attrs)
..…/Vcs/django-rest-framework/rest_framework/fields.py:103: in get_attribute
    instance = getattr(instance, attr)
project/app/models.py:3283: in default_duration
    pdb.set_trace()
project/app/models.py:3283: in default_duration
    pdb.set_trace()
..…/pyenv/3.5.2/lib/python3.5/bdb.py:48: in trace_dispatch
    return self.dispatch_line(frame)
..…/pyenv/3.5.2/lib/python3.5/bdb.py:67: in dispatch_line
    if self.quitting: raise BdbQuit
E   bdb.BdbQuit

testmon can't be loaded in pytest 3.1

testmon does not work with pytest 3.1. What's missing to make testmon pytest 3.1 compatible? Currently the plugin cannot be loaded on test execution.

_pytest.vendored_packages.pluggy.PluginValidationError: Plugin 'testmon' could not be loaded:
 (pytest 3.1.0 (/.../lib/python3.4/site-packages), Requirement.parse('pytest<3.1,>=2.8.0'))!

computing changes of installed libraries

coverage.py tracking has some overhead, so it will only be turned on for the project's "test" and "source" directories.

The rest of the imported Python files, the libraries, are invisible to coverage.py (and testmon). If they change (e.g. through pip install --upgrade some_library), testmon doesn't know about it. Most of the time they don't change, so we could live with this.
But sometimes tests do exercise bugs present only in specific versions of libraries, and then testmon would break.
Let's create a function which computes one hash representing all the files in all the libraries (including the standard library), taking into account just os.stat()/scandir info.

docstring changes

It seems that just changing the docstring of a class or function is enough to trigger the re-execution of all tests that use the class/object.

I would expect it to only re-execute tests that access MyClass.__doc__.
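This presumably happens because the checksummed code blocks include docstrings. A docstring-insensitive checksum could be sketched as follows (illustrative only, not testmon's actual algorithm; assumes Python 3.8+ where string literals parse as ast.Constant):

```python
import ast
import hashlib


def checksum_ignoring_docstrings(source):
    """Hypothetical checksum that stays stable when only docstrings change:
    blank out docstring constants before hashing the AST dump."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.FunctionDef,
                             ast.AsyncFunctionDef, ast.ClassDef)):
            body = node.body
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                body[0].value.value = ""  # neutralize the docstring
    # ast.dump() omits line/column info by default, so a longer or
    # shorter docstring shifting the code around does not affect the hash
    return hashlib.sha1(ast.dump(tree).encode()).hexdigest()
```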

sqlite3.OperationalError: database is locked

Hi, I got this weird error when using testmon and pytest-watcher together. Both work fine individually, but as you can see below, they somehow interfere with each other.

The error is in testmon, hence the issue here. I'm stumped, and the really funny thing is that it used to work just a couple of days ago, so I'm not sure what happened.

Terminal output
$ ptw -- --testmon import_contacts
Running: py.test --testmon import_contacts
Traceback (most recent call last):
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/bin/py.test", line 11, in <module>
    sys.exit(main())
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/_pytest/config.py", line 58, in main
    return config.hook.pytest_cmdline_main(config=config)
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 745, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
    res = hook_impl.function(*args)
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/testmon/pytest_testmon.py", line 107, in pytest_cmdline_main
    init_testmon_data(config)
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/testmon/pytest_testmon.py", line 98, in init_testmon_data
    variant=variant)
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/testmon/testmon_core.py", line 205, in __init__
    self.init_connection()
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/testmon/testmon_core.py", line 221, in init_connection
    self._check_data_version()
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/testmon/testmon_core.py", line 227, in _check_data_version
    self._write_attribute(self._DATA_VERSION_KEY, str(self.DATA_VERSION))
  File "/Users/maxno/.pyenv/versions/venvs/15a88226-efa3-11e6-afd5-475fa2145b46/lib/python3.6/site-packages/testmon/testmon_core.py", line 271, in _write_attribute
    [compressed_data_buffer, dataid])
sqlite3.OperationalError: database is locked
^C
$ pytest --testmon import_contacts/
=========================================================================================== test session starts ============================================================================================
platform darwin -- Python 3.6.0, pytest-3.1.1, py-1.4.33, pluggy-0.4.0
Using --randomly-seed=1497255222
Django settings: referanza.settings.local_settings (from ini file)
testmon=True, changed files: 0, skipping collection of 0 items, run variant: default
rootdir: /Users/maxno/Code/referanza, inifile: pytest.ini
plugins: xdist-1.16.0, translations-1.0.4, timeout-1.2.0, testmon-0.9.5, reqs-0.0.5, regtest-0.15.1, randomly-1.1.2, notifier-1.0.0, mock-1.6.0, instafail-0.3.0, gitignore-1.3, django-3.1.2, datadir-ng-1.1.0, cov-2.5.1, bpdb-0.1.4, bdd-2.18.2, hypothesis-3.11.1, celery-4.0.2
collected 11 items

import_contacts/tests/test_inner_serializer_meta_class.py ......
import_contacts/tests/test_fuzzy_assertion.py ..
import_contacts/tests/test_import_session.py .
import_contacts/tests/test_phonenumber_validator.py ..

======================================================================================== 11 passed in 0.56 seconds =========================================================================================
$

Documented API to query the testmon database for information

It would be cool if we could query the testmon database for information. A case I'd like is to ask it which tests ran for a certain line of the code. This would be useful when you have a piece of code you suspect is broken and just want to read all the tests that massage that line. The workaround now is to change the line and observe which tests are rerun.
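Until such an API exists, the .testmondata file is a plain SQLite database, so it can at least be explored with the standard library. The schema is undocumented, so this sketch discovers the tables at runtime instead of assuming any names:

```python
import sqlite3

# The .testmondata file is a plain SQLite database; its schema is
# undocumented, so list the tables via sqlite_master rather than
# hard-coding table names that may change between testmon versions.
def list_tables(db_path=".testmondata"):
    conn = sqlite3.connect(db_path)
    try:
        return [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
    finally:
        conn.close()
```

From there you can inspect each table's columns with `PRAGMA table_info(<name>)` and experiment with queries, though anything you build on the current schema is of course unsupported.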

"corruption" of .testmondata by an external exception (e.g. database connection or wrong environment)

Test node data gets saved after the test finishes (via set_dependencies in

testmon_data.set_dependencies(nodeid, testmon_data.get_nodedata(nodeid, self.cov.get_data(), rootdir), result)
), but when you then abort the test run using SIGINT (or when -x triggers it), TestmonData.write_data is skipped, so the corresponding mtimes and file_checksums metadata are not saved.
(via #60 (comment))

I think set_dependencies should really only set the dependencies, and then be skipped whenever the mtimes are not written; alternatively, the mtimes should be updated together with the dependencies so the two can never get out of sync.
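One way to guarantee the latter, sketched here with illustrative table names (not testmon's actual schema), is to persist the dependencies and the mtimes inside a single SQLite transaction, so an aborted run rolls both back together:

```python
import sqlite3

# Hypothetical sketch: write per-test dependencies and file mtimes in one
# transaction. The table and column names are illustrative only; the point
# is that "with conn" commits on success and rolls back on any exception,
# so SIGINT/-x cannot leave dependencies saved without matching mtimes.
def save_run(conn, nodeid, deps, mtimes):
    with conn:
        conn.execute("INSERT OR REPLACE INTO node_deps VALUES (?, ?)",
                     (nodeid, repr(deps)))
        for fname, mtime in mtimes.items():
            conn.execute("INSERT OR REPLACE INTO file_mtimes VALUES (?, ?)",
                         (fname, mtime))
```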

Only consider paths passed as args to py.test (track_changed_files)

track_changed_files looks at all dirs and .py files in the project_directory.

But it should only consider the files/dirs passed as arguments to py.test for performance reasons.

E.g. py.test --testmon foo/bar should not scan all dirs/files from ".".

btw: is there a reason for a specific / separate project_directory setting?
If this is for handling relative paths, couldn't the pytest-cache location (for example) be used instead?
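A sketch of what the restricted scan could look like, assuming `args` stands for the positional paths passed to py.test (this is not testmon's current code):

```python
import os

# Sketch: restrict the changed-file scan to the paths given on the command
# line instead of walking the whole project directory. Explicitly named
# files are yielded as-is; directories are walked for .py files only.
def iter_candidate_files(args):
    for arg in args:
        if os.path.isfile(arg):
            yield arg
        else:
            for root, _dirs, files in os.walk(arg):
                for name in files:
                    if name.endswith(".py"):
                        yield os.path.join(root, name)
```

With `py.test --testmon foo/bar`, only `foo/bar` would be walked, which avoids scanning everything under ".".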

specifying explicit dependency of a test case on a file

This would be needed for data files or other files which influence the execution of tests but don't contain normal python lines of code.

One option to specify the dependency of a test would be to precede it with a decorator.

@testmon.depends_on('../testmon/plugin.py')
def test_example(...):
    ...

Another possible option would be a pragma comment.

testmon would have to merge this explicit information with the dependency data acquired from coverage.py.

In the first phase, whole-file granularity would suffice: whenever the file's modification time changes, dependent tests would be re-executed.

FileNotFoundError in process_code.py with jinja

I wanted to try testmon out of curiosity, and tried running it on https://github.com/hackebrot/qutebrowser

To reproduce, patch tox.ini like this, then run tox -e unittests:

diff --git a/tox.ini b/tox.ini
index 101d2b0..30af07d 100644
--- a/tox.ini
+++ b/tox.ini
@@ -22,6 +22,7 @@ deps =
     pytest-capturelog==0.7
     pytest-qt==1.3.0
     pytest-mock==0.4.3
+    testmon
 # We don't use {[testenv:mkvenv]commands} here because that seems to be broken
 # on Ubuntu Trusty.
 commands =

Without testmon, the tests run correctly, with testmon, I get:

____________________________________________________________________ test_simple_template ____________________________________________________________________

self = <testmon.pytest_testmon.TestmonDeselect object at 0x7f89b624bba8>
__multicall__ = <MultiCall 0 results, 0 meths, kwargs={'__multicall__': <MultiCall 0 results, 0 meths, kwargs={...}>, 'item': <Function 'test_simple_template'>}>
item = <Function 'test_simple_template'>

    def pytest_runtest_call(self, __multicall__, item):
>       result = self.testmon.track_execute(__multicall__.execute, item.nodeid)

.tox/unittests/lib/python3.4/site-packages/testmon/pytest_testmon.py:125: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.tox/unittests/lib/python3.4/site-packages/testmon/testmon_core.py:106: in track_execute
    self.set_dependencies(nodeid, self.cov.data)
.tox/unittests/lib/python3.4/site-packages/testmon/testmon_core.py:92: in set_dependencies
    result[filename] = checksum_coverage(self.parse_cache(filename), value.keys())
.tox/unittests/lib/python3.4/site-packages/testmon/testmon_core.py:46: in parse_cache
    self.modules_cache[module] = Module(file_name=module).blocks
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <testmon.process_code.Module object at 0x7f89ae4f9978>, source_code = None, file_name = '/home/florian/proj/qutebrowser/hackebrot-git/html/test.html'

    def __init__(self, source_code=None, file_name='<unknown>'):
        self.blocks = []
        if source_code is None:
>           with open(file_name) as f:
                source_code = f.read()
E               FileNotFoundError: [Errno 2] No such file or directory: '/home/florian/proj/qutebrowser/hackebrot-git/html/test.html'

.tox/unittests/lib/python3.4/site-packages/testmon/process_code.py:45: FileNotFoundError
_________________________________________________________________________ test_utf8 __________________________________________________________________________

self = <testmon.pytest_testmon.TestmonDeselect object at 0x7f89b624bba8>
__multicall__ = <MultiCall 0 results, 0 meths, kwargs={'__multicall__': <MultiCall 0 results, 0 meths, kwargs={...}>, 'item': <Function 'test_utf8'>}>
item = <Function 'test_utf8'>

    def pytest_runtest_call(self, __multicall__, item):
>       result = self.testmon.track_execute(__multicall__.execute, item.nodeid)

.tox/unittests/lib/python3.4/site-packages/testmon/pytest_testmon.py:125: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
.tox/unittests/lib/python3.4/site-packages/testmon/testmon_core.py:106: in track_execute
    self.set_dependencies(nodeid, self.cov.data)
.tox/unittests/lib/python3.4/site-packages/testmon/testmon_core.py:92: in set_dependencies
    result[filename] = checksum_coverage(self.parse_cache(filename), value.keys())
.tox/unittests/lib/python3.4/site-packages/testmon/testmon_core.py:46: in parse_cache
    self.modules_cache[module] = Module(file_name=module).blocks
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <testmon.process_code.Module object at 0x7f895acb6390>, source_code = None, file_name = '/home/florian/proj/qutebrowser/hackebrot-git/html/test.html'

    def __init__(self, source_code=None, file_name='<unknown>'):
        self.blocks = []
        if source_code is None:
>           with open(file_name) as f:
                source_code = f.read()
E               FileNotFoundError: [Errno 2] No such file or directory: '/home/florian/proj/qutebrowser/hackebrot-git/html/test.html'

.tox/unittests/lib/python3.4/site-packages/testmon/process_code.py:45: FileNotFoundError

It seems testmon is trying to read html/test.html, a file which doesn't actually exist: in the test it is loaded via jinja together with a patched jinja.utils.read_file. Full test source of the failing tests:

import os.path

import pytest

from qutebrowser.utils import jinja


@pytest.fixture(autouse=True)
def patch_read_file(monkeypatch):
    """pytest fixture to patch utils.read_file."""
    def _read_file(path):
        """A read_file which returns a simple template if the path is right."""
        if path == os.path.join('html', 'test.html'):
            return """Hello {{var}}"""
        else:
            raise ValueError("Invalid path {}!".format(path))

    monkeypatch.setattr('qutebrowser.utils.jinja.utils.read_file', _read_file)


def test_simple_template():
    """Test with a simple template."""
    template = jinja.env.get_template('test.html')
    # https://bitbucket.org/logilab/pylint/issue/490/
    data = template.render(var='World')  # pylint: disable=no-member
    assert data == "Hello World"


def test_utf8():
    """Test rendering with an UTF8 template.

    This was an attempt to get a failing test case for #127 but it seems
    the issue is elsewhere.

    https://github.com/The-Compiler/qutebrowser/issues/127
    """
    template = jinja.env.get_template('test.html')
    # https://bitbucket.org/logilab/pylint/issue/490/
    data = template.render(var='\u2603')  # pylint: disable=no-member
    assert data == "Hello \u2603"
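A possible fix is to skip coverage entries that do not exist on disk before trying to parse them. This sketch uses a `parse_module` callable standing in for testmon's Module parsing (an assumption, not the actual code):

```python
import os

# Sketch: coverage can report "files" that are virtual (e.g. templates
# registered under a fake filename) or already deleted. Skip anything that
# is not a real file instead of crashing with FileNotFoundError.
def safe_dependencies(coverage_files, parse_module):
    deps = {}
    for filename in coverage_files:
        if not os.path.isfile(filename):
            continue  # virtual or missing file: nothing to checksum
        deps[filename] = parse_module(filename)
    return deps
```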

Invalid call to config.hook.pytest_deselected via pytest_ignore_collect

The call of the pytest_deselected hook in

config.hook.pytest_deselected(items=['1'] * self.testmon_data.unaffected_paths[strpath])
seems to be invalid.

def pytest_ignore_collect(self, path, config):
    strpath = path.strpath
    if strpath in self.testmon_data.unaffected_paths:
        config.hook.pytest_deselected(items=['1'] * self.testmon_data.unaffected_paths[strpath])
        return True

pytest-sugar expects a list of items there, not strings.

This causes the following crash:

Test session starts (platform: linux, Python 3.5.1, pytest 2.8.7, pytest-sugar 0.5.1)
django settings: velodrome.settings (from environment variable)
testmon=True, changed files: velodrome/lock8/tests/test_api.py, skipping collection of 173 items, run variant: 634c62a1a58db41492c603ad92194004
rootdir: /home/daniel/projects/lock8/velodrome, inifile: setup.cfg
plugins: django-2.9.2.dev15+ng4c0c366, sugar-0.5.1, cov-2.2.1, testmon-0.7, mock-0.10.1, xdist-1.13.1

――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― ERROR collecting  ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――
../../../.pyenv/versions/velodrome/lib/python3.5/site-packages/testmon/pytest_testmon.py:161: in pytest_ignore_collect
    config.hook.pytest_deselected(items=['1'] * self.testmon_data.unaffected_paths[strpath])
../../../.pyenv/versions/velodrome/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:724: in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
../../../.pyenv/versions/velodrome/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:338: in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
../../../.pyenv/versions/velodrome/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:333: in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
../../../.pyenv/versions/velodrome/lib/python3.5/site-packages/_pytest/vendored_packages/pluggy.py:596: in execute
    res = hook_impl.function(*args)
../../../Vcs/pytest-sugar/pytest_sugar.py:73: in pytest_deselected
    pluginmanager = items[0].config.pluginmanager
E   AttributeError: 'str' object has no attribute 'config'

================================================================================== short test summary info ===================================================================================
ERROR 

Results (0.13s):
       1 deselected

The pytest-sugar code is at: https://github.com/Frozenball/pytest-sugar/blob/56210bacff3d5ad2568c0cf163f3a256bf3500f9/pytest_sugar.py#L73.
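To see why the placeholder strings crash, here is a tiny self-contained illustration of the hook contract, using stand-in classes rather than real pytest objects:

```python
# Stand-ins illustrating the pytest_deselected contract: consumers such as
# pytest-sugar expect real collected items and dereference
# item.config.pluginmanager, so passing '1' strings raises AttributeError.
class FakeConfig:
    pluginmanager = "the plugin manager"

class FakeItem:
    config = FakeConfig()

def sugar_pytest_deselected(items):
    # simplified version of what pytest-sugar does with the items
    return items[0].config.pluginmanager
```

Passing the actual deselected Item objects (or not calling the hook at all from pytest_ignore_collect) would satisfy consumers like pytest-sugar.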

-k expression is ignored when not using --tlf

pytest --testmon -k foo should only consider tests matching "foo", but instead it displays the recorded results for all failed tests.
-k only seems to be respected when used together with --tlf.

per-method granularity

In the current version we track changes at file/module level. That is fine for many projects, but suboptimal if a lot of functionality is split across only a couple of files (pytest itself is a good example).
Editing test methods was also kind of annoying, because every test change triggered execution of all the tests in the file.
Of course coverage.py provides line granularity, so it should be possible (if somewhat messy) to be more precise. But in Python you can break an execution unit without changing any of its lines, simply by adding statements after the block. Would it be possible to mitigate this by also adding adjacent (non-whitespace) lines to the change tracking? Maybe.
For now I have an ugly-code experiment which uses Python's "ast" module.
https://www.youtube.com/watch?v=IJRdQKumCuE
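A rough sketch of the ast-based idea, treating top-level functions and classes as the tracked blocks (this is my own illustration, not the actual experiment from the video):

```python
import ast
import hashlib

# Sketch: split a module into top-level function/class blocks and checksum
# each block's source, so only tests touching a changed block re-run.
# Line ranges are approximated from the deepest lineno inside each node.
def block_checksums(source):
    tree = ast.parse(source)
    lines = source.splitlines()
    sums = {}
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            end = max(getattr(n, "lineno", 0) for n in ast.walk(node))
            text = "\n".join(lines[node.lineno - 1:end])
            sums[node.name] = hashlib.sha1(text.encode()).hexdigest()
    return sums
```

Comparing two snapshots of these checksums tells you which blocks changed; the adjacent-lines concern above would still need handling on top of this.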

Does not get path to database file correctly with "/tmp" as additional dir/arg

% py.test project /tmp/envdir_tty
Traceback (most recent call last):
  File "…/pyenv/3.5.1/envs/tmp-3.5.1-2UJvnn/bin/py.test", line 9, in <module>
    load_entry_point('pytest', 'console_scripts', 'py.test')()
  File "…/Vcs/pytest/_pytest/config.py", line 49, in main
    return config.hook.pytest_cmdline_main(config=config)
  File "…/Vcs/pytest/_pytest/vendored_packages/pluggy.py", line 727, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "…/Vcs/pytest/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "…/Vcs/pytest/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "…/Vcs/pytest/_pytest/vendored_packages/pluggy.py", line 599, in execute
    res = hook_impl.function(*args)
  File "…/Vcs/pytest-testmon/testmon/pytest_testmon.py", line 81, in pytest_cmdline_main
    init_testmon_data(config)
  File "…/Vcs/pytest-testmon/testmon/pytest_testmon.py", line 75, in init_testmon_data
    variant=variant)
  File "…/Vcs/pytest-testmon/testmon/testmon_core.py", line 150, in __init__
    self.init_connection()
  File "…/Vcs/pytest-testmon/testmon/testmon_core.py", line 172, in init_connection
    self.connection = sqlite3.connect(self.datafile)
sqlite3.OperationalError: unable to open database file

self.datafile ends up being /.testmondata.

Might be related to pytest-dev/pytest#1594.

crash with non utf-8 encoding .py files

E.g.
lib/python3.4/site-packages/dateutil/parser.py
is supposedly # -*- coding: iso-8859-1 -*-
testmon crashes
with:
File "/Users/tibor/tmonworkspace/testmon/testmon/process_code.py", line 47, in __init__
INTERNALERROR> source_code = f.read()
INTERNALERROR> File "/Users/tibor/tmonworkspace/warehouse/.tox/py34/bin/../lib/python3.4/codecs.py", line 319, in decode
INTERNALERROR> (result, consumed) = self._buffer_decode(data, self.errors, final)

Absolute paths in values :(

>>> from testmon.testmon_core import *
>>> t = TestmonData('.')
>>> t.init_connection()
>>> t.read_data()
>>> t.node_data.keys()[0]
u'project/unittest/test_foo.py::test_foo'
>>> t.node_data.values()[0].keys()[0]
u'/data/bamboo/bamboo-agent-home/xml-data/build-dir/XXX/foo/bar/__init__.py'

The node_data keys are relative paths, but the paths inside the node_data values are absolute. It would be great if both were relative.

My use case:

We have a big and slow test suite that is impractical to rerun on all developers' machines all the time. So we've got the CI environment to create the testmon database periodically, and a little script to download this cache onto local developer machines, so running testmon is super fast and nice.

I'm going to try to work around the problem right now by going through the entire dataset and changing the paths, but it would be nice if the paths were relative from the beginning so I didn't have to.
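The workaround, roughly: rewrite the absolute dependency paths relative to the project root before using the database. A sketch, assuming the {nodeid: {path: data}} shape shown in the session above:

```python
import os

# Sketch: convert absolute dependency paths from a CI-produced .testmondata
# into paths relative to the CI build root, so the data is portable to
# developer machines with different checkout locations.
def relativize(node_data, rootdir):
    out = {}
    for nodeid, deps in node_data.items():
        out[nodeid] = {os.path.relpath(path, rootdir): value
                       for path, value in deps.items()}
    return out
```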

Performance issue on big projects

We have a big project with a big test suite. When starting pytest with testmon enabled it takes something like 8 minutes just to start when running (almost) no tests. A profile dump reveals this:

Wed Dec  7 14:37:13 2016    testmon-startup-profile

         353228817 function calls (349177685 primitive calls) in 648.684 seconds

   Ordered by: cumulative time
   List reduced from 15183 to 100 due to restriction <100>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.001    0.001  648.707  648.707 env/bin/py.test:3(<module>)
 10796/51    0.006    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:335(_hookexec)
 10796/51    0.017    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:332(<lambda>)
 11637/51    0.063    0.000  648.614   12.718 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:586(execute)
        1    0.000    0.000  648.612  648.612 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/config.py:29(main)
  10596/2    0.016    0.000  648.612  324.306 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py:722(__call__)
        1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:80(pytest_cmdline_main)
        1    0.000    0.000  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/pytest_testmon.py:70(init_testmon_data)
        1    0.004    0.004  562.338  562.338 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:258(read_fs)
     4310    1.385    0.000  545.292    0.127 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:224(test_should_run)
     4310    3.995    0.001  542.647    0.126 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:229(<dictcomp>)
  4331550   54.292    0.000  538.652    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:104(checksums)
        1    0.039    0.039  537.138  537.138 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/testmon_core.py:273(compute_unaffected)
 73396811   67.475    0.000  484.571    0.000 /Users/andersh/triresolve/env/lib/python2.7/site-packages/testmon/process_code.py:14(checksum)
 73396871  360.852    0.000  360.852    0.000 {method 'encode' of 'str' objects}
        1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)
        1    0.000    0.000   83.370   83.370 /Users/andersh/triresolve/env/lib/python2.7/site-packages/_pytest/main.py:118(pytest_cmdline_main)

As you can see, the last line (the actual test run) accounts for about 83 seconds cumulative, while the two checksum lines above it account for roughly 360 and 484 seconds respectively.

This hurts our use case a LOT, and since we use a reference .testmondata file that has been produced by a CI job, it seems excessive (and useless) to recalculate this on each machine when it could be calculated once up front.

So, what do you guys think about caching this data in .testmondata?
