
pytest-xdist's Introduction

pytest-xdist


The pytest-xdist plugin extends pytest with new test execution modes, the most used being distributing tests across multiple CPUs to speed up test execution:

pytest -n auto

With this call, pytest will spawn a number of worker processes equal to the number of available CPUs, and distribute the tests randomly across them.
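Projects that always want this behavior can put the option into their pytest configuration; a minimal sketch of a pytest.ini:

[pytest]
addopts = -n auto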

Documentation

Documentation is available at Read The Docs.


pytest-xdist's Issues

Parallelization of the parameters

How can I run each parameter in a different process?
Example:

myparam = ['server1', 'server2']
@pytest.fixture(scope='session', params=myparam)
def param_session(request):
    return request.param

How can I make each option run in a separate process?
For example:

The process "gw1" is started for the parameter "server1"
The process "gw2" is started for the parameter "server2"

How?
Thanks for the help
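For readers on newer versions: pytest-xdist 2.3+ ships an xdist_group mark which, combined with --dist loadgroup, keeps all tests of a group on a single worker; attaching it per parameter via pytest.param gets close to the behavior asked for (which worker serves which group is still decided at runtime). A hedged sketch:

import pytest

# Each parameter carries its own xdist_group, so all tests using
# 'server1' land on one worker and all tests using 'server2' on another
# (assuming pytest-xdist >= 2.3 and --dist loadgroup).
myparam = [
    pytest.param('server1', marks=pytest.mark.xdist_group('server1')),
    pytest.param('server2', marks=pytest.mark.xdist_group('server2')),
]

@pytest.fixture(scope='session', params=myparam)
def param_session(request):
    return request.param

Run with: pytest -n 2 --dist loadgroup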

Non-initial conftest.py not run with output capturing.

I normally run my test suite using the -n <num> option. My setup.cfg was previously set to pass a fixed list of paths as arguments to py.test via the addopts setting. I changed setup.cfg to no longer use a fixed list of paths (instead using the python_files glob option) in order to allow paths to be specified normally. After this change, I found that output from the logging module was no longer captured.

The logging module is initialized with basicConfig upon import of a log module inside the application package. This is unavoidable. The log module is indirectly imported by a conftest.py module. This is not unavoidable, just very inconvenient to work around; besides, I want to see the log output along with the results of failing tests.

Further investigation yielded some confusing results.

To reproduce the behavior, I created a test suite structured like this:

tests/
  foo/
    conftest.py
    test_foo.py
  bar/
    test_bar.py

No setup.cfg or similar is present.

foo/conftest.py initializes the logging module using basicConfig. Both foo/test_foo.py and bar/test_bar.py contain a test function that calls the logging module to log a single message, "Hello World". Under certain conditions, the "Hello World" message is not captured for any test, but sent to stderr instead. I have narrowed down those conditions to these invocations of py.test, with their capturing results (a minimal reconstruction of the files follows the list):

  • py.test without -n <num>, regardless of other arguments: Captured
  • py.test -n 3 without other arguments: Not captured
  • py.test -n 3 tests: Not captured
  • py.test -n 3 tests/foo: Captured
  • py.test -n 3 tests/foo/test_foo.py: Captured
  • py.test -n 3 tests/bar: Captured
  • py.test -n 3 tests/bar/test_bar.py: Captured
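For reference, a minimal reconstruction of the files described above (contents assumed from the description, not copied from the real suite):

# tests/foo/conftest.py
import logging
logging.basicConfig(level=logging.INFO)

# tests/foo/test_foo.py (tests/bar/test_bar.py is the same)
import logging

def test_hello():
    logging.getLogger(__name__).info("Hello World")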

test item scheduling does not schedule N items to N workers

I may just be misunderstanding how xdist works, but I can't seem to get the number of workers I requested to all accept tasks in parallel:

$ py.test -v -n 8
================================ test session starts =================================
platform darwin -- Python 3.5.1, pytest-2.8.5, py-1.4.31, pluggy-0.3.1 -- .../pytest-xdist-testcase/venv/bin/python3
cachedir: .cache
rootdir: .../pytest-xdist-testcase, inifile: 
plugins: xdist-1.13.1
[gw0] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw1] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw2] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw3] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw4] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw5] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw6] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw7] darwin Python 3.5.1 cwd: .../pytest-xdist-testcase
[gw0] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw1] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw2] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw3] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw4] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw5] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw6] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
[gw7] Python 3.5.1 (default, Dec  7 2015, 21:59:10)  -- [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.1.76)]
gw0 [8] / gw1 [8] / gw2 [8] / gw3 [8] / gw4 [8] / gw5 [8] / gw6 [8] / gw7 [8]
scheduling tests via LoadScheduling

test_tests.py::test_1[0] 
test_tests.py::test_1[2] 
test_tests.py::test_2[0] 
test_tests.py::test_2[2] 
[gw0] PASSED test_tests.py::test_1[2] 
[gw7] PASSED test_tests.py::test_1[0] 
[gw2] PASSED test_tests.py::test_2[0] 
[gw4] PASSED test_tests.py::test_2[2] 
test_tests.py::test_2[1] 
test_tests.py::test_1[3] 
test_tests.py::test_1[1] 
test_tests.py::test_2[3] 
[gw7] PASSED test_tests.py::test_1[1] 
[gw0] PASSED test_tests.py::test_1[3] 
[gw4] PASSED test_tests.py::test_2[3] 
[gw2] PASSED test_tests.py::test_2[1] 

============================= 8 passed in 64.12 seconds ==============================

Watching the output as it comes, it looks like the second block of 4 tests doesn't start running until the first block of 4 completes.

Am I misunderstanding something? Why don't all 8 workers start running tests at the same time? Why doesn't gw1, for example, get any tests?

Poor interaction between `-n#` and `--collect-only`

Specifying e.g. -n8 to py.test causes tests to run regardless of whether or not the --collect-only flag is set (version is latest from PyPI as of this post; 1.13.1). This was attempted by invoking py.test directly rather than by using e.g. a setup.py shim. This is a paper-cut.

Internal error when a fixture raises an exception

Using the following code in test_import.py:

import pytest

@pytest.fixture(params=['1', '2'])
def crasher():
    raise RuntimeError

def test_aaa0(crasher):
    pass
def test_aaa1(crasher):
    pass

Running as py.test -n 1 --maxfail 1 produces:

============================= test session starts =============================
platform win32 -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: E:\tmp, inifile:
plugins: xdist-1.14
gw0 [4]
scheduling tests via LoadScheduling
EINTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "c:\users\bjones\downloads\anaconda3\lib\site-packages\_pytest\main.py", line 94, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "c:\users\bjones\downloads\anaconda3\lib\site-packages\_pytest\main.py", line 125, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "c:\users\bjones\downloads\anaconda3\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "c:\users\bjones\downloads\anaconda3\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "c:\users\bjones\downloads\anaconda3\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "c:\users\bjones\downloads\anaconda3\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "<remote exec>", line 49, in pytest_runtestloop
INTERNALERROR>   File "c:\users\bjones\downloads\anaconda3\lib\site-packages\execnet\gateway_base.py", line 737, in receive
INTERNALERROR>     raise self._getremoteerror() or EOFError()
INTERNALERROR> EOFError

=================================== ERRORS ====================================
_______________________ ERROR at setup of test_aaa0[1] ________________________
[gw0] win32 -- Python 3.5.1 c:\users\bjones\downloads\anaconda3\python.exe
@pytest.fixture(params=['1', '2'])
    def crasher():
>       raise RuntimeError
E       RuntimeError

test_import.py:5: RuntimeError
!!!!!!!!!!!! xdist.dsession.Interrupted: stopping after 1 failures !!!!!!!!!!!!
=========================== 1 error in 0.59 seconds ===========================

Removing either the --maxfail or the -n flag avoids the INTERNALERROR.

Invalid output with parameter --durations from pytest

I am trying to find the slowest tests in my test suite with the --durations=2 parameter of py.test.

The problem is that I receive this output:

========================== slowest 2 test durations ===========================
build 08-Jul-2016 08:46:49 0.04s setup >models/test_stack_template.py::TestStackIntegration::test_render_create_from_snapshot_with_validation
build 08-Jul-2016 08:46:49 0.01s setup >models/test_stack_controller.py::TestStackController::test_delete_a_new_stack_during_update
=================================== FAILURES ===================================

Is there an equivalent for xdist, or is it possible to send the durations back to py.test so that they are displayed properly?

A single test usually takes between 3 and 7 minutes.

xdist is incompatible with built-in pytest maxfail argument

Steps to reproduce:

  • Create test_deleteme.py in an otherwise empty directory (e.g. /tmp/deleteme):
import time

def test_0 ():
    assert False

def test_1 ():
    time.sleep(1)

def test_2 ():
    pass

def test_3 ():
    pass

def test_4 ():
    pass
  • Change to the test directory (cd /tmp/deleteme).
  • Run py.test -x -n2 or py.test --maxfail=1 -n2.

The output should show that one test has failed, and it does. However, it should not show INTERNALERROR>. See the full output:

/tmp/deleteme$ py.test -x -n2
================================================ test session starts ================================================
platform linux2 -- Python 2.7.9, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: /tmp/deleteme, inifile: 
plugins: xdist-1.14
gw0 [5] / gw1 [5]
scheduling tests via LoadScheduling
FINTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/main.py", line 94, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/main.py", line 125, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "<remote exec>", line 49, in pytest_runtestloop
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/execnet/gateway_base.py", line 737, in receive
INTERNALERROR>     raise self._getremoteerror() or EOFError()
INTERNALERROR> EOFError
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/main.py", line 94, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/main.py", line 125, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "<remote exec>", line 49, in pytest_runtestloop
INTERNALERROR>   File "/usr/local/lib/python2.7/dist-packages/execnet/gateway_base.py", line 737, in receive
INTERNALERROR>     raise self._getremoteerror() or EOFError()
INTERNALERROR> EOFError

===================================================== FAILURES ======================================================
______________________________________________________ test_0 _______________________________________________________
[gw0] linux2 -- Python 2.7.9 /usr/bin/python
def test_0 ():
>       assert False
E       assert False

test_deleteme.py:5: AssertionError
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================= 1 failed in 1.55 seconds ==============================================

Slaves crash on win64 with error "Not properly terminated"

Following on to #68 - after setting the PYTEST_DEBUG environment variable, I get the following output when I run with more than one process, for example:

py.test -v  -n 3 -m regression --max-slave-restart=0 [redacted]

Slave restarting disabled
      pytest_testnodedown [hook]
          node: <SlaveController gw1>
          error: Not properly terminated
[gw1] node down: Not properly terminated
      finish pytest_testnodedown --> [] [hook]
      pytest_runtest_logreport [hook]
          report: <TestReport 'redacted.py::Redacted::test_eta2' when='???' outcome='failed'>
        pytest_report_teststatus [hook]
            report: <TestReport 'redacted.py::Redacted::test_eta2' when='???' outcome='failed'>
        finish pytest_report_teststatus --> ('failed', 'f', 'FAILED') [hook]
[gw1] FAILED redacted.py::Redacted::test_eta2       finish pytest_runtest_logreport --> [] [hook]

The set of tests that fail is different upon different runs, but all failures are with the above error (Not properly terminated).

versions:

pytest                    2.9.1                    py27_0    
pytest-xdist              1.13.1                   py27_0    

Disabling "looponfail" breaks xdist

xdist breaks if it is executed with looponfail disabled:

$ py.test -n2 -p no:xdist.looponfail
Traceback (most recent call last):
  File "x:\pytest-xdist\.env35\lib\site-packages\_pytest\config.py", line 1040, in getoption
    val = getattr(self.option, name)
AttributeError: 'CmdOptions' object has no attribute 'looponfail'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Programming\Python35\lib\runpy.py", line 170, in _run_module_as_main
    "__main__", mod_spec)
  File "D:\Programming\Python35\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "X:\pytest-xdist\.env35\Scripts\py.test.exe\__main__.py", line 9, in <module>
  File "x:\pytest-xdist\.env35\lib\site-packages\_pytest\config.py", line 48, in main
    return config.hook.pytest_cmdline_main(config=config)
  File "x:\pytest-xdist\.env35\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "x:\pytest-xdist\.env35\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "x:\pytest-xdist\.env35\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "x:\pytest-xdist\.env35\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "x:\pytest-xdist\xdist\plugin.py", line 98, in pytest_cmdline_main
    if val("looponfail"):
  File "x:\pytest-xdist\.env35\lib\site-packages\_pytest\config.py", line 1054, in getvalue
    return self.getoption(name)
  File "x:\pytest-xdist\.env35\lib\site-packages\_pytest\config.py", line 1050, in getoption
    raise ValueError("no option named %r" % (name,))
ValueError: no option named 'looponfail'

Since we plan to make looponfail and boxed separate plugins, xdist should work by itself.

Does not appear possible to get more debug information about why a slave has crashed

I apologise if there is a documentation option I missed - I've not been able to find any option to indicate this.

I am running pytest with xdist on Windows - python 2.7.11 from Anaconda with the following versions:

pytest                    2.9.1                    py27_0    
pytest-xdist              1.13.1                   py27_0    

When I run with more than one process, for example:

py.test -v -r a --tb=long -n 3 -m regression --max-slave-restart=0 [redacted]

I get some tests that crash the slaves, but I cannot see what causes the crash. If I repeat the above command line several times, different tests cause slave crashes. The same tests do not crash when run without -n 3. I guess it has something to do with my tests, but I don't have any information with which to debug.

Sample output:

================================== FAILURES ===================================
โ†[1mโ†[31m_ redacted.py __โ†[0m
[gw0] win32 -- Python 2.7.11 C:\dev\bin\Anaconda\python.exe
Slave 'gw0' crashed while running 'redacted'
โ†[1mโ†[31m_ redacted.py __โ†[0m
[gw2] win32 -- Python 2.7.11 C:\dev\bin\Anaconda\python.exe
Slave 'gw2' crashed while running 'redacted'
โ†[1mโ†[31m==================== 2 failed, 11 passed in 23.14 seconds =====================โ†[0m

Cannot run with -p no:tmpdir option

When I use the xdist option with -p no:tmpdir also set, it causes AttributeError: 'Config' object has no attribute '_tmpdirhandler':

$ py.test -n auto -p no:tmpdir test.py
================================================= test session starts ==================================================
platform darwin -- Python 2.7.9, pytest-2.8.3, py-1.4.31, pluggy-0.3.1
django settings: proj.settings.local (from environment variable)
rootdir: /Users/khj/works/projrepo/proj, inifile: pytest.ini
plugins: cov-2.1.0, django-2.9.1, xdist-1.13.1
gw0 C / gw1 I / gw2 I / gw3 I / gw4 I / gw5 I / gw6 I / gw7 IINTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/main.py", line 88, in wrap_session
INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/xdist/dsession.py", line 494, in pytest_sessionstart
INTERNALERROR>     nodes = self.nodemanager.setup_nodes(putevent=self.queue.put)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/xdist/slavemanage.py", line 45, in setup_nodes
INTERNALERROR>     nodes.append(self.setup_node(spec, putevent))
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/xdist/slavemanage.py", line 54, in setup_node
INTERNALERROR>     node.setup()
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/xdist/slavemanage.py", line 216, in setup
INTERNALERROR>     basetemp = self.config._tmpdirhandler.getbasetemp()
INTERNALERROR> AttributeError: 'Config' object has no attribute '_tmpdirhandler'

System info:

$ sw_vers
ProductName:    Mac OS X
ProductVersion: 10.11
BuildVersion:   15A284
$ pip freeze | grep -i pytest
pytest==2.8.3
pytest-cov==2.1.0
pytest-django==2.9.1
pytest-xdist==1.13.1

Final adjustments for finishing GitHub import

Hooray for migrating the repository to GitHub! 🎈

I think there are a few steps to finish the migration though:

  • Change the README at BitBucket to point to the new repository location
  • Disable issues and Wiki

xdist chooses the wrong package version

When installing two different versions of a package in the same environment, xdist doesn't pick the same version as regular pytest.


Steps to reproduce, using the Invoke package as an example:

virtualenv ENV && source ENV/bin/activate
pip install pytest pytest-xdist invoke==0.11.0

Download the source of Invoke 0.12.2 and install it with python setup.py install.

I have a dummy test file test_version.py, containing:

import invoke

def test_version():
    assert False, str(invoke.__version__)

When running py.test test_version.py, we see that the test is running with Invoke 0.12.2:

============================ test session starts =============================
platform darwin -- Python 2.7.10, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Users/jcruy/Desktop/xdist_bug, inifile: 
plugins: xdist-1.14
collected 1 items 

test_version.py F

================================== FAILURES ==================================
________________________________ test_version ________________________________

    def test_version():
>       assert False, str(invoke.__version__)
E       AssertionError: 0.12.2
E       assert False

test_version.py:5: AssertionError
========================== 1 failed in 0.09 seconds ==========================

However, when running it with xdist (py.test test_version.py -n 2), we see that the test is running with Invoke 0.11.0:

============================ test session starts =============================
platform darwin -- Python 2.7.10, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /Users/jcruy/Desktop/xdist_bug, inifile: 
plugins: xdist-1.14
gw0 [1] / gw1 [1]
scheduling tests via LoadScheduling
F
================================== FAILURES ==================================
________________________________ test_version ________________________________
[gw1] darwin -- Python 2.7.10 /Users/jcruy/Desktop/xdist_bug/ENV/bin/python
def test_version():
>       assert False, str(invoke.__version__)
E       AssertionError: 0.11.0
E       assert False

test_version.py:5: AssertionError
========================== 1 failed in 0.45 seconds ==========================

Notes:

  • I chose the Invoke package only for example's sake; it happens with other packages as well.
  • I know that it's not really a good thing to install two versions of a package in the same environment, but I believe that the behaviour of pytest and xdist should at least be consistent.
  • I would gladly contribute to a fix, if you point me to the area of the code which might be responsible for that.

Environment:

  • OS X 10.11.4
  • Python 2.7.10
  • pytest 2.9.2
  • pytest-xdist 1.14

looponfail output issues w/ looponfailroots

When using xdist's -f/--looponfail with a large number of looponfailroots folders set, the shell output is almost useless because you need to scroll each time something changes.

example output, some content removed:

### LOOPONFAILING ###
unit/test_...
### waiting for changes ###
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...
### Watching:   ...

A somewhat expected response would be to set fewer roots and rearrange the project, or that I'm doing something wrong.
Here is a simple example of why this doesn't apply:
a project using Python and JavaScript can have various workers generating data and logs, all of which trigger looponfail.

possible solutions:

  • allow regexes to specify which files/folders should be watched
  • display "Watching ..." for looponfailroots only once when "py.test -f" starts

setUpClass() invoked many times when running in multiple-CPU mode

os: win7
py:2.7 && pytest (2.8.2) && pytest-xdist (1.13.1)

import unittest

class SomeTest(unittest.TestCase):

    def test_unit1(self):
        pass

    def test_unit2(self):
        pass

    def test_unit3(self):
        pass

    def test_unit4(self):
        pass

    def test_unit5(self):
        pass

    def test_unit6(self):
        pass

    @classmethod
    def setUpClass(cls):
        with open('c:/abc.txt', 'a') as f :
            f.write('######\n')

command:

py.test testme.py   -v -n 3

content of c:/abc.txt:

######
######
######

Can xdist really not guarantee that setUpClass() runs only once?
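For what it's worth: each xdist worker is a separate Python process, so setUpClass() runs once per worker that executes tests from the class. A hedged workaround sketch that serializes the workers on a lock file and performs the write only once per run (assumes the third-party filelock package):

import os
import unittest
from filelock import FileLock  # third-party package, an assumption here

MARKER = 'c:/abc.setup_done'

class SomeTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # Only the first worker to acquire the lock does the work;
        # the others see the marker file and skip it.
        with FileLock(MARKER + '.lock'):
            if not os.path.exists(MARKER):
                with open('c:/abc.txt', 'a') as f:
                    f.write('######\n')
                open(MARKER, 'w').close()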

-n auto and tox on windows does not work

multiprocessing.cpu_count() uses int(os.environ['NUMBER_OF_PROCESSORS']) to get the number of CPUs. I guess this does not work as tox cleans all the environment variables. Thus I suggest using another approach. There are several listed in http://stackoverflow.com/questions/1006289/how-to-find-out-the-number-of-cpus-using-python.
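A defensive sketch of such an approach (illustrative only, not xdist's actual code):

import multiprocessing

def robust_cpu_count(default=1):
    # On Windows, multiprocessing.cpu_count() reads NUMBER_OF_PROCESSORS
    # and raises NotImplementedError when tox has scrubbed it from the env.
    try:
        return multiprocessing.cpu_count()
    except NotImplementedError:
        return default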

File "C:\Python27\Lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "C:\Python27\Lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\Scripts\py.test.EXE\__main__.py", line 9, in <module>
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\config.py", line 39, in main
    config = _prepareconfig(args, plugins)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\config.py", line 118, in _prepareconfig
    pluginmanager=pluginmanager, args=args)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 595, in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 249, in _wrapped_call
    wrap_controller.send(call_outcome)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\helpconfig.py", line 28, in pytest_cmdline_parse
    config = outcome.get_result()
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 279, in get_result
    _reraise(*ex)  # noqa
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 264, in __init__
    self.result = func()
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\config.py", line 856, in pytest_cmdline_parse
    self.parse(args)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\config.py", line 961, in parse
    self._preparse(args, addopts=addopts)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\config.py", line 926, in _preparse
    self.known_args_namespace = ns = self._parser.parse_known_args(args, namespace=self.option.copy())
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\config.py", line 491, in parse_known_args
    return self.parse_known_and_unknown_args(args, namespace=namespace)[0]
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\_pytest\config.py", line 499, in parse_known_and_unknown_args
    return optparser.parse_known_args(args, namespace=namespace)
  File "C:\Python27\Lib\argparse.py", line 1720, in parse_known_args
    namespace, args = self._parse_known_args(args, namespace)
  File "C:\Python27\Lib\argparse.py", line 1926, in _parse_known_args
    start_index = consume_optional(start_index)
  File "C:\Python27\Lib\argparse.py", line 1866, in consume_optional
    take_action(action, args, option_string)
  File "C:\Python27\Lib\argparse.py", line 1778, in take_action
    argument_values = self._get_values(action, argument_strings)
  File "C:\Python27\Lib\argparse.py", line 2204, in _get_values
    value = self._get_value(action, arg_string)
  File "C:\Python27\Lib\argparse.py", line 2233, in _get_value
    result = type_func(arg_string)
  File "c:\windows\temp\devpi-test829\ngc-tsg-1.17.0.3687097\.tox\python\lib\site-packages\xdist\plugin.py", line 8, in parse_numprocesses
    return multiprocessing.cpu_count()
  File "C:\Python27\Lib\multiprocessing\__init__.py", line 136, in cpu_count
    raise NotImplementedError('cannot determine number of cpus')
NotImplementedError: cannot determine number of cpus

allow getting process number during test

Hi, we have a limited number of test accounts with which to run our tests. We'd like to run our tests in parallel with xdist, but can't use the same account in different tests at the same time. We can choose which account to use in a fixture, but need to know which worker is running to determine which account to use.

So is there a way to get the worker number? I see output like

gw0 I / gw1 I
gw0 [53] / gw1 [53]

when I start the test session with two workers. So can I know if I'm gw0 or gw1 from within a fixture?
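For later readers, a sketch of one approach. It assumes a pytest-xdist version that injects the worker id into each worker's config (workerinput in current releases; older releases called it slaveinput), and the account names below are hypothetical:

import pytest

@pytest.fixture(scope='session')
def test_account(request):
    # workerinput is only present when running under an xdist worker.
    if hasattr(request.config, 'workerinput'):
        worker_id = request.config.workerinput['workerid']  # e.g. 'gw0'
    else:
        worker_id = 'master'
    # Hypothetical mapping of workers to test accounts.
    accounts = {'gw0': 'account-a', 'gw1': 'account-b'}
    return accounts.get(worker_id, 'account-a')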

--looponfail doesn't work with many processes

When I invoke py.test like this: py.test -n 2 --looponfail I get the following error:

INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/_pytest/main.py", line 88, in wrap_session
INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec

INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/xdist/dsession.py", line 494, in pytest_sessionstart
INTERNALERROR>     nodes = self.nodemanager.setup_nodes(putevent=self.queue.put)
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/xdist/slavemanage.py", line 41, in setup_nodes
INTERNALERROR>     specs=self.specs)
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/Users/omer.katz/.virtualenvs/drifter-api/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR> AttributeError: '_HookCaller' object has no attribute 'spec_opts'

It does trigger the error again when a file changes.

pytest-xdist changes verbosity of pytest automatically

Normally if I run pytest I get:

coalib/tests/MakeTempTest.py ....

With pytest -v I get:

coalib/tests/MakeTempTest.py::MakeTempTest::test_temp_file_existence PASSED

With pytest -q I get:

....

When I run pytest -n 2, it automatically switches to the -q type of output. How do I get output like

coalib/tests/MakeTempTest.py ....

with pytest-xdist ?

pytest hangs before running tests

Hi all,

I'm running "py.test -k test_authenticated_user_cannot_index_ -n2 --max-slave-restart=0" hangs before running any tests.

The process tree shows a waiting STDIN for each of the workers:

$ pstree -hap 15994
py.test,15994 /root/.local/bin/py.test -k test_authenticated_user_cannot_index_ -n2 --max-slave-restart=0
  ├─python3.5,16010 -u -c import sys;exec(eval(sys.stdin.readline()))
  │   └─{python3.5},16011
  └─python3.5,16012 -u -c import sys;exec(eval(sys.stdin.readline()))
      └─{python3.5},16013

The output until the hanging point shows:

============================= test session starts ==============================
platform linux -- Python 3.5.0, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: /lws, inifile: setup.cfg
plugins: nameko-2.2.0, cov-2.2.1, rerunfailures-2.0.0, flakes-1.0.1, pep8-1.0.6, xdist-1.14
gw0 I / gw1 I

The traceback after hitting Ctrl-C adds:

Traceback (most recent call last):
  File "/root/.local/lib/python3.5/site-packages/eventlet/hubs/hub.py", line 457, in fire_timers
    timer()
  File "/root/.local/lib/python3.5/site-packages/eventlet/hubs/timer.py", line 58, in __call__
    cb(*args, **kw)
  File "/root/.local/lib/python3.5/site-packages/eventlet/semaphore.py", line 145, in _do_acquire
    waiter.switch()
greenlet.error: cannot switch to a different thread
[gw0] node down: Not properly terminated
Slave restarting disabled
gw0 [1] / gw1 [2434]

scheduling tests via LoadScheduling
[gw1] node down: keyboard-interrupt
Slave restarting disabled

==================================== ERRORS ====================================
_____________________________ ERROR collecting gw0 _____________________________
Different tests were collected between gw1 and gw0. The difference is:
--- gw1

+++ gw0

@@ -1,2434 +1 @@
...
all of the test names
...

 xdist.dsession.Interrupted: <SlaveController gw1> received keyboard-interrupt !
=========================== 1 error in 34.28 seconds ===========================

Versions and plugins:

flaky==3.1.1
py==1.4.31
pytest==2.9.1
pytest-flakes==1.0.1
pytest-pep8==1.0.6
pytest-cov==2.2.1
pytest-rerunfailures==2.0.0
coverage==4.0.3

Please let me know if there's any other info I could collect.

Thanks!

Yannis

xdist bails out too soon when using --exitfirst

While investigating pytest-dev/pytest#442 I ran into an issue where xdist behaves differently than regular pytest.

Basically, it seems that when using --exitfirst or --maxfail=X, in combination with -n X, pytest (DSession?) bails out before collecting the teardown report, so the pytest_runtest_logreport doesn't get run.

At first glance, it seems like the plugin should flush any unread entries from the queue after telling the nodes to stop.

I'm willing to work on this if someone can verify that this makes sense and/or point me in the right direction :)

Virtualized or containerized environments might not report number of CPUs correctly

This problem mainly presents itself in CI environments like the ones Travis provides.
When you use -n auto, xdist will create more workers than are actually available to the container/VM, because multiprocessing.cpu_count() reads the number of CPUs from the /proc/ filesystem, which reports the host's CPUs.
A possible solution would be to use psutil, which might be able to read the /proc/ filesystem more correctly, but I'm not 100% sure about it.

As a workaround I'd like to suggest adding:

if os.environ.get('TRAVIS') == 'true' and n == 'auto':
  cpu_count = 2

to the code to prevent spawning too many processes and slowing down test runs.
A better solution would be to provide a hook that allows reinterpreting what auto means and returning a different number.
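On Linux, one hedged alternative is the process scheduler's affinity mask, which often reflects container CPU limits better than the raw count (os.sched_getaffinity needs Python 3.3+ and Linux):

import multiprocessing
import os

def effective_cpu_count():
    # The affinity mask covers only the CPUs this process may actually use.
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:  # not available on this platform
        return multiprocessing.cpu_count()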

eventlet support

It seems pytest-xdist does not play nicely with eventlet.

Self contained example:

pip install eventlet==0.19.0 pytest_xdist==1.14 pytest
cat > blah.py <<EOF
import eventlet
eventlet.monkey_patch()
def f(x):
    return x + 1
EOF

cat > test_blah.py <<EOF
import blah
def test_stuff():
    assert blah.f(1) == 2
EOF

py.test -n2
# hangs forever after printing "scheduling tests via LoadScheduling"

pytest-xdist server side timeout

NOTE: This is not about timeout for test code itself (pytest-timeout works well here), this is about need for timeout in pytest-xdist.

First, let me say big thank you for pytest and pytest-xdist. We use it to run ~400 Docker containers on ~10 servers on AWS. It works wonders!

There are scenarios where pytest-xdist does not detect a remote session crash or disconnect, and as such will wait for results forever.

Today's xdist code detects session crash via EOF on the SSH session. When network connection is torn down, server marks the worker as dead, and re-adds it. All good.

But... consider a scenario where the SSH is not torn down:

  1. Run N tests on multiple remote machines with pytest-xdist,
  2. Tests spawn a python process on remote machine via SSH
  3. We run in boxed mode, so this process forks to run actual test code
  4. Process #2 gets killed or crashes
  5. SSH session stays up because process #3 inherited at least one stdin/out/err from the process #2 (standard SSH behavior).

In this case, the server-side xdist thinks the session is up and waits for the results for a really, really long time ;-)

And yes, process #2 does not normally crash. In our case it was OOM-killed quite persistently. All it takes is one OOM kill across tens of thousands of tests and the entire batch is ruined.

Please let me know if I can provide more info on this issue.

[root@nsth-c10 nsth] #.python --version
Python 2.7.10
[root@nsth-c10 nsth] #.py.test --version
This is pytest version 2.8.0, imported from /usr/local/lib/python2.7/site-packages/pytest-2.8.0-py2.7.egg/pytest.pyc
setuptools registered plugins:
pytest-xdist-1.13.1 at /usr/local/lib/python2.7/site-packages/pytest_xdist-1.13.1-py2.7.egg/xdist/boxed.pyc
pytest-xdist-1.13.1 at /usr/local/lib/python2.7/site-packages/pytest_xdist-1.13.1-py2.7.egg/xdist/looponfail.pyc
pytest-xdist-1.13.1 at /usr/local/lib/python2.7/site-packages/pytest_xdist-1.13.1-py2.7.egg/xdist/plugin.pyc
[root@nsth-c10 nsth] #.

P.S.
Moved from pytest-dev/pytest#1550

Proposal: control test distribution in pytest-xdist

Here's a proposal on how to control the way tests are distributed among slaves in pytest-xdist's --dist=load mode, a feature that has been requested by a lot of people in pytest-dev/pytest#175.

Note: the names here are just an initial thought, suggestions are more than welcome.

Scenarios

People have requested a number of scenarios. Here I will summarize the more popular ones and show a proposed API to support each one.

Overview

pytest-xdist in --dist=load will split tests over all slaves, where each test runs in exactly one slave, distributing the test load across multiple processes and/or remote machines.

Serial execution of tests of the same Class

The most popular scenario by far: people would like to somehow mark a class or module so that all tests from that class/module execute in the same slave.

import pytest

@pytest.mark.xdist_same_slave
class TestFoo:

    def test_1(self): 
        pass
    def test_2(self): 
        pass

@pytest.mark.xdist_same_slave
class TestBar:

    def test_3(self): 
        pass
    def test_4(self): 
        pass

The @pytest.mark.xdist_same_slave decorator, when applied to a class, will instruct xdist to run all tests of that class in the same slave. Which slave is assigned to run the tests from each class will be determined at runtime in a round-robin fashion: as test items from classes with this decorator are scheduled for execution, they will be assigned to execute on the same node, but different classes will have their tests assigned to different nodes. In the example above with -n 2, pytest-xdist will execute tests from TestFoo in slave 0 and tests from TestBar in slave 1. The actual slave numbers are not guaranteed to be the same between runs and will be determined at runtime, with xdist doing its best to distribute tests to available slaves while keeping the constraint of binding marked tests to the same slave.

As tests execute in the same slave, their execution will be effectively serial.

Serial execution of tests of the same Module

Similar to the above, but will serialize all tests in a module, both contained in test classes or free functions:

import pytest

pytestmark = pytest.mark.xdist_same_slave

class Test:

    def test_1(self): 
        pass
    def test_2(self): 
        pass

def test_a():
    pass

Serial execution of tests which use a fixture

import xdist, pytest

@xdist.xdist_same_slave
@pytest.fixture(scope='session')
def db_session():
    return DBSession()

@pytest.yield_fixture
def db(db_session):
    db_session.begin_transaction()
    yield 
    db_session.end_transaction()

This will make all tests which use db execute in the same slave (exactly which one is determined at runtime), because db depends on db_session, which is bound to execute in a single slave. This will effectively set up the database once in a single slave, and all tests which use db will run in that slave.

Unfortunately we can't use pytest.mark with fixtures, so we have to rely on a normal decorator.


Opinions and suggestions are more than welcome!

Pytest ignores '.ini' file in subdirectory when run with '-n'

I've noticed some strange behaviour with regard to pytest.ini files when those files are in a subdirectory. When I run py.test with -n (with any value, including '0'), the pytest.ini in the subdirectory is completely ignored. The following shell session replicates the error:

$ cd $(mktemp -d)
$ mkdir tests
$ ( cd tests && wget --quiet https://gist.github.com/allanlewis/3429d6dfd15738eabae0/raw/9a1cdd27d56e32d6dfb33013e7f67fcb58c3284b/pytest.ini https://gist.githubusercontent.com/allanlewis/3429d6dfd15738eabae0/raw/de09d81d723c898fa9b7d54b24b370725dab1464/test_things.py )
$ cat tests/test_things.py
import pytest
@pytest.mark.onlythis
def test_should_run():
    pass
def test_should_not_run():
    pass
$ cat tests/pytest.ini 
[pytest]
addopts = -m "onlythis"
$ py.test -v tests
========================================= test session starts =========================================
platform linux2 -- Python 2.7.8, pytest-2.8.3, py-1.4.30, pluggy-0.3.1 -- /usr/bin/python
cachedir: tests/.cache
rootdir: /tmp/tmp.jMcnQhurH3/tests, inifile: pytest.ini
plugins: xdist-1.13.1, mock-0.8.1, html-1.7
collected 2 items 

tests/test_things.py::test_should_run PASSED

================================ 1 tests deselected by "-m 'onlythis'" ================================
=============================== 1 passed, 1 deselected in 0.00 seconds ================================
$ py.test -v -n auto tests
========================================= test session starts =========================================
platform linux2 -- Python 2.7.8, pytest-2.8.3, py-1.4.30, pluggy-0.3.1 -- /usr/bin/python
cachedir: .cache
rootdir: /tmp/tmp.jMcnQhurH3, inifile: 
plugins: xdist-1.13.1, mock-0.8.1, html-1.7
[gw0] linux2 Python 2.7.8 cwd: /tmp/tmp.jMcnQhurH3
[gw1] linux2 Python 2.7.8 cwd: /tmp/tmp.jMcnQhurH3
[gw2] linux2 Python 2.7.8 cwd: /tmp/tmp.jMcnQhurH3
[gw3] linux2 Python 2.7.8 cwd: /tmp/tmp.jMcnQhurH3
[gw1] Python 2.7.8 (default, Sep 24 2015, 18:26:19)  -- [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)]
[gw0] Python 2.7.8 (default, Sep 24 2015, 18:26:19)  -- [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)]
[gw2] Python 2.7.8 (default, Sep 24 2015, 18:26:19)  -- [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)]
[gw3] Python 2.7.8 (default, Sep 24 2015, 18:26:19)  -- [GCC 4.9.2 20150212 (Red Hat 4.9.2-6)]
gw0 [2] / gw1 [2] / gw2 [2] / gw3 [2]
scheduling tests via LoadScheduling

test_things.py::test_should_run 
[gw3] PASSED test_things.py::test_should_run 
test_things.py::test_should_not_run 
[gw3] PASSED test_things.py::test_should_not_run 

====================================== 2 passed in 0.47 seconds =======================================
$ py.test -v -n 0 tests   
========================================= test session starts =========================================
platform linux2 -- Python 2.7.8, pytest-2.8.3, py-1.4.30, pluggy-0.3.1 -- /usr/bin/python
cachedir: .cache
rootdir: /tmp/tmp.jMcnQhurH3, inifile: 
plugins: xdist-1.13.1, mock-0.8.1, html-1.7
collected 2 items 

tests/test_things.py::test_should_run PASSED
tests/test_things.py::test_should_not_run PASSED

====================================== 2 passed in 0.00 seconds =======================================

If I explicitly supply the configuration file, everything's fine:

$ py.test -v -n 0 -c tests/pytest.ini tests 
========================================= test session starts =========================================
platform linux2 -- Python 2.7.8, pytest-2.8.3, py-1.4.30, pluggy-0.3.1 -- /usr/bin/python
cachedir: .cache
rootdir: /tmp/tmp.jMcnQhurH3, inifile: tests/pytest.ini
plugins: xdist-1.13.1, mock-0.8.1, html-1.7
collected 2 items 

tests/test_things.py::test_should_run PASSED

================================ 1 tests deselected by "-m 'onlythis'" ================================
=============================== 1 passed, 1 deselected in 0.01 seconds ================================

Add support for threaded test execution

Threads will help speed up I/O-intensive or calculation-intensive test suites, such as end-to-end tests that perform HTTP requests, invoke methods from C extensions, etc.
See https://github.com/sindresorhus/ava as a good example for the speed gain.

I discussed this with @RonnyPfannschmidt and it seems that it would require a lot of work and thread safety from pytest itself.
This could be provided as a third-party extension, provided that pytest-xdist offers a public extension API.
pytest-cloud is an example of such an extension, but as far as I know there's no detailed documentation that explains how to write one.

scheduler state machines

In order to properly formalize and test scheduling mechanisms, as well as correct the bugs around node restarts, we need a formalized set of state machines for the master side of xdist, as well as state machines for the slave side.

This also addresses #17, #18 and #19.

test_remoteinitconfig failing in pytest-2.9

This fails in Travis; for some reason it works on Windows:

=================================== FAILURES ===================================
____________________________ test_remoteinitconfig _____________________________
testdir = <Testdir local('/tmp/pytest-of-travis/pytest-0/testdir/test_remoteinitconfig0')>
    def test_remoteinitconfig(testdir):
        from xdist.remote import remote_initconfig
        config1 = testdir.parseconfig()
        config2 = remote_initconfig(config1.option.__dict__, config1.args)
>       assert config2.option.__dict__ == config1.option.__dict__
E       assert {'assertmode'...': False, ...} == {'assertmode':...': False, ...}
E         Omitting 56 identical items, use -v to show
E         Differing items:
E         {'file_or_dir': ['/tmp/pytest-of-travis/pytest-0/testdir/test_remoteinitconfig0']} != {'file_or_dir': []}
E         Use -v to get the full diff
/home/travis/build/pytest-dev/pytest-xdist/testing/test_remote.py:69: AssertionError
==== 1 failed, 93 passed, 4 skipped, 9 xfailed, 1 xpassed in 34.51 seconds =====
ERROR: InvocationError: '/home/travis/build/pytest-dev/pytest-xdist/.tox/py34-pytest29/bin/py.test'

Doesn't work with pytest-runner

I have the following setup.py file:

from setuptools import setup, find_packages

setup(
    name="mypkg",
    package_dir={'': 'src'},
    packages=find_packages('src'),
    version="0.1",
    install_requires=["pytest-xdist", "toolz>=0.7.4", "boto", "pyrfc3339", "pytz", "mock"],
    setup_requires=["pytest-runner"],
    dependency_links=[],
    include_package_data=True,
    zip_safe=False)

And the following package layout:

.
├── pytest.ini
├── setup.py
├── src
│   └── mypkg
└── tests
    ├── __init__.py
    └── test_misc.py

Using pytest-runner, this works fine:

python setup.py pytest

But if I add the -f switch to run with pytest-xdist, it fails on dependency resolution:

python setup.py pytest --addopts=-f
running pytest
Searching for mock
Best match: mock 1.3.0
Processing mock-1.3.0-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/mock-1.3.0-py2.7.egg
Searching for pytz
Best match: pytz 2015.7
Processing pytz-2015.7-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/pytz-2015.7-py2.7.egg
Searching for pyrfc3339
Best match: pyRFC3339 1.0
Processing pyRFC3339-1.0-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/pyRFC3339-1.0-py2.7.egg
Searching for boto
Best match: boto 2.39.0
Processing boto-2.39.0-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/boto-2.39.0-py2.7.egg
Searching for toolz>=0.7.4
Best match: toolz 0.7.4
Processing toolz-0.7.4-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/toolz-0.7.4-py2.7.egg
Searching for pytest-xdist
Best match: pytest-xdist 1.13.1
Processing pytest_xdist-1.13.1-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/pytest_xdist-1.13.1-py2.7.egg
Searching for funcsigs
Best match: funcsigs 0.4
Processing funcsigs-0.4-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/funcsigs-0.4-py2.7.egg
Searching for six>=1.7
Best match: six 1.10.0
Processing six-1.10.0-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/six-1.10.0-py2.7.egg
Searching for pbr>=0.11
Best match: pbr 1.8.1
Processing pbr-1.8.1-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/pbr-1.8.1-py2.7.egg
Searching for py>=1.4.22
Best match: py 1.4.31
Processing py-1.4.31-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/py-1.4.31-py2.7.egg
Searching for pytest>=2.4.2
Best match: pytest 2.8.7
Processing pytest-2.8.7-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/pytest-2.8.7-py2.7.egg
Searching for execnet>=1.1
Best match: execnet 1.4.1
Processing execnet-1.4.1-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/execnet-1.4.1-py2.7.egg
Searching for apipkg>=1.4
Best match: apipkg 1.4
Processing apipkg-1.4-py2.7.egg

Using /Users/henriquealves/Code/nu/common-core-python/.eggs/apipkg-1.4-py2.7.egg
running egg_info
writing requirements to src/common_core_python.egg-info/requires.txt
writing src/common_core_python.egg-info/PKG-INFO
writing top-level names to src/common_core_python.egg-info/top_level.txt
writing dependency_links to src/common_core_python.egg-info/dependency_links.txt
reading manifest file 'src/common_core_python.egg-info/SOURCES.txt'
writing manifest file 'src/common_core_python.egg-info/SOURCES.txt'
running build_ext
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 3, in <module>
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/execnet-1.4.1-py2.7.egg/execnet/__init__.py", line 9, in <module>
    import apipkg
ImportError: No module named apipkg
Traceback (most recent call last):
  File "setup.py", line 12, in <module>
    zip_safe=False)
  File "/Users/henriquealves/anaconda/envs/blank/lib/python2.7/distutils/core.py", line 151, in setup
    dist.run_commands()
  File "/Users/henriquealves/anaconda/envs/blank/lib/python2.7/distutils/dist.py", line 953, in run_commands
    self.run_command(cmd)
  File "/Users/henriquealves/anaconda/envs/blank/lib/python2.7/distutils/dist.py", line 972, in run_command
    cmd_obj.run()
  File "build/bdist.macosx-10.5-x86_64/egg/ptr.py", line 79, in run
  File "/Users/henriquealves/anaconda/envs/blank/lib/python2.7/site-packages/setuptools-19.4-py2.7.egg/setuptools/command/test.py", line 140, in with_project_on_sys_path
  File "build/bdist.macosx-10.5-x86_64/egg/ptr.py", line 124, in run_tests
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest-2.8.7-py2.7.egg/_pytest/config.py", line 48, in main
    return config.hook.pytest_cmdline_main(config=config)
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest-2.8.7-py2.7.egg/_pytest/vendored_packages/pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest-2.8.7-py2.7.egg/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest-2.8.7-py2.7.egg/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest-2.8.7-py2.7.egg/_pytest/vendored_packages/pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest_xdist-1.13.1-py2.7.egg/xdist/looponfail.py", line 24, in pytest_cmdline_main
    looponfail_main(config)
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest_xdist-1.13.1-py2.7.egg/xdist/looponfail.py", line 35, in looponfail_main
    remotecontrol.loop_once()
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest_xdist-1.13.1-py2.7.egg/xdist/looponfail.py", line 100, in loop_once
    self.setup()
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest_xdist-1.13.1-py2.7.egg/xdist/looponfail.py", line 64, in setup
    self.gateway = self.initgateway()
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/pytest_xdist-1.13.1-py2.7.egg/xdist/looponfail.py", line 56, in initgateway
    return execnet.makegateway("popen")
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/execnet-1.4.1-py2.7.egg/execnet/multi.py", line 128, in makegateway
    gw = gateway_bootstrap.bootstrap(io, spec)
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/execnet-1.4.1-py2.7.egg/execnet/gateway_bootstrap.py", line 92, in bootstrap
    bootstrap_import(io, spec)
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/execnet-1.4.1-py2.7.egg/execnet/gateway_bootstrap.py", line 27, in bootstrap_import
    s = io.read(1)
  File "/Users/henriquealves/Code/nu/common-core-python/.eggs/execnet-1.4.1-py2.7.egg/execnet/gateway_base.py", line 389, in read
    "expected %d bytes, got %d" % (numbytes, len(buf)))
EOFError: expected 1 bytes, got 0

Here are the versions in .eggs:

README.txt
apipkg-1.4-py2.7.egg
boto-2.39.0-py2.7.egg
execnet-1.4.1-py2.7.egg
funcsigs-0.4-py2.7.egg
mock-1.3.0-py2.7.egg
pbr-1.8.1-py2.7.egg
py-1.4.31-py2.7.egg
pyRFC3339-1.0-py2.7.egg
pytest-2.8.7-py2.7.egg
pytest_runner-2.6.2-py2.7.egg
pytest_xdist-1.13.1-py2.7.egg
pytz-2015.7-py2.7.egg
six-1.10.0-py2.7.egg
toolz-0.7.4-py2.7.egg

pytest-xdist hangs if max-slave-restart is set and the restart limit is reached

It can be observed in this build:
https://travis-ci.org/ClearcodeHQ/pytest-dbfixtures/jobs/108118580

I'm testing whether a package is configured properly (i.e. that all non-.py files are included, hence the switch to testing the installed package), so I install it with "pip install ." before the test run.
pytest_load_initial_conftests throws a ValidationError if the additional files cannot be accessed. However, I then end up with slaves being restarted in an endless loop, or, if I set --max-slave-restart, with py.test hanging once the restart limit is reached.

Poor interaction between `-n#` and `-c X.cfg`

Specifying e.g. -n8 appears to cause py.test options in config files e.g.

[pytest]
python_files = *_test.py

to be ignored when they're specified by invocations of the form py.test -c /something/elsewhere/setup.cfg -n8. This doesn't appear to be a problem when invoking py.test directly in a setup.py shim such that the setup.cfg is automagically found by py.test.

I've only stared at this for a little while, however, and may have missed a few things. Pointers?

cc @nathanielmanistaatgoogle

Using xdist with libfaketime

Hi,

We would like to use python-libfaketime (a wrapper around the excellent libfaketime). It works by re-executing the interpreter with the LD_PRELOAD environment variable pointing to the libfaketime.so file. This works in a regular (non-distributed) pytest run, but it fails with xdist: after some digging, it seems that LD_PRELOAD is not passed along to the child processes. Using --dist=load --tx 2*popen//env:LD_PRELOAD=/path/to/libfaketime.so.1 does not seem to help either.

We've tried running libfaketime.reexec_if_needed in pytest_configure and pytest_configure_node, but both are executed in the parent process and are thus useless here. Forcing libfaketime.reexec_if_needed in the child produces a loop, since it re-execs the parent command: py.test -> collect tests -> initialize nodes -> re-exec py.test -> ... Various other attempts to make LD_PRELOAD reach the child processes also failed.

To reproduce:


files:

# assuming some sort of venv
$ pip install pytest pytest-xdist libfaketime
$ mkdir libfaketime_xdist
$ cd libfaketime_xdist
$ touch conftest.py
$ touch test_issue.py

conftest.py:

from libfaketime import reexec_if_needed

def pytest_configure():
    reexec_if_needed()

test_issue.py:

import datetime
from libfaketime import fake_time

def test_faketime():
    with fake_time("2016-05-05 00:00:00"):
        assert datetime.datetime.now() == datetime.datetime(2016, 5, 5, 0, 0, 0)

execute:

$ py.test  # succeeds
$ py.test -n 2  # fails

Any tips on how to approach this would be very welcome.
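
One avenue that might be worth trying (untested here, and assuming reexec_if_needed skips the re-exec when libfaketime's environment is already in place): export the environment in the shell before invoking py.test, so the master and every popen worker inherit it and no process ever needs to re-exec:

$ export LD_PRELOAD=/path/to/libfaketime.so.1
$ py.test -n 2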

Use fixtures to better distribute tests via load scheduling

xdist currently supports 2 ways of distributing tests across a number of workers:

  1. "Each" Scheduling: Given a set of tests, "each" scheduling sends each test in the set to each available worker.
  2. Load Scheduling: Given a set of tests, load scheduling distributes the tests among the available workers. Each test runs only once, and xdist tries to give each worker the same amount of work.

Problems with Load Scheduling's current implementation

The current load scheduling implementation distributes tests naively across the workers. Often this means that two tests which depend on the same fixture get assigned to different runners, and the fixture has to be created on each runner.

This is a problem. Fixtures often capture expensive operations. When multiple tests depend on the same fixture, the author typically expects the expensive operation represented by that fixture to happen only once and be reused by all dependent tests. When tests that depend on the same fixture are sent to different workers, that expensive operation is executed multiple times. This is wasteful and can add significantly to the overall testing runtime.

Our goal should be to reuse the existing pytest concept of the fixture to better distribute tests and reduce overall testing time, preferably without adding new options or APIs. This benefits the most users and strengthens pytest's declarative style.

Proposed solution

We can solve this problem in 3 phases:

  1. Let's formalize the concept of a "test chunk" or "test block". A test chunk is a group of tests that always execute on the same worker. This is an internal xdist abstraction that the user normally doesn't have to know about.

    The master will only send complete test chunks to workers, not individual tests. Initially, each test will be assigned to its own chunk, so this won't change xdist's behavior at first. But it will pave the way for us to chunk tests by their attributes, like what fixtures they depend on.

    Once we have this internal abstraction, we can optionally also expose a hook that lets users define their own chunking algorithm to replace the initial default of "1 test -> 1 chunk". The hook won't be very useful until more information about each test is made available, which brings us to the next phase.

  2. We need to pass in additional information to the xdist master about the tests it's running so it can better distribute them. Specifically the master needs to be able to identify unique instances of every fixture that each test depends on. Tests that depend on distinct fixture instances can be assigned to different chunks and thus sent to different workers.

    To identify the distinct fixture instances that each test depends on, we need the following pieces of information for each test:

    1. Test name.
    2. For each fixture that the test depends on:
      1. Fixture name.
      2. Fixture scope and "scope instance".
        • e.g. This fixture is module-scoped and this particular instance of the fixture is for module test_a.py.
      3. Fixture parameter inputs, if the fixture is parameterized.
        • e.g. We need to distinguish fixture(request.param=1) and fixture(request.param=2) as separate fixture instances.

    Initially this information won't be used for anything. It will just be made available to the master. At this point, we'll be ready for the final phase.

  3. Using the new information we have about tests, along with the new internal abstraction of a test chunk, we can now chunk tests up by the set of unique fixture instances they depend on (see the sketch after this list).

    Tests that depend on the same instance of a fixture will now always be sent to the same worker. 🎉
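
To make the idea concrete, here is a minimal sketch of the phase-3 chunking step. This is not xdist code; chunk_by_fixture_instances and the fixture-instance keys are hypothetical stand-ins for the information phase 2 would make available to the master:

from collections import defaultdict

def chunk_by_fixture_instances(tests, fixture_keys_for):
    # fixture_keys_for maps a test id to the hashable keys of the fixture
    # instances it depends on, e.g. ("db", "module", "test_a.py", None).
    chunks = defaultdict(list)
    for test in tests:
        chunks[frozenset(fixture_keys_for(test))].append(test)
    # Each chunk groups tests that must run on the same worker; the chunks
    # themselves can then be distributed freely via load scheduling.
    return list(chunks.values())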

Advantages over other solutions

This approach has two major advantages over other proposals.

  1. This approach adds no new configuration options or public APIs that users have to learn. Everyone using pytest automatically gets a better, arguably even more correct, distribution of their tests to workers without having to do any work.
  2. This approach promotes a declarative style of writing tests over an imperative one. The goal we should strive for is: Capture your ideas correctly, and pytest will figure out the appropriate execution details. In practice, this is probably not always feasible, and the user will want to exercise control in specific cases. But the closer we can stick to this goal the better the pytest user experience will be.

How this approach addresses common use cases

There are several use cases described in the original issue requesting some way to control parallel execution. This is how the approach described here addresses those use cases:

  • Use case: "I want all the tests in a module to go to the same worker."

    Solution: Make all the tests in that module depend on the same module-scoped fixture (see the sketch after this list).

    If the tests don't need to depend on the same fixture, then why do they need to go to the same worker? (The most likely reason is that they are not correctly isolated.)

  • Use case: "I want all the tests in a class to go the same worker."

    Solution: The solution here is the same. If the tests need to go to the same worker, then they should depend on a common fixture.

    Pytest already supports class-scoped fixtures (scope='class'), alongside function, module, and session scopes, so the tests in the class can depend on a common class-scoped fixture; a common module-scoped fixture achieves the same result at a coarser granularity.

  • Use case: "I want all the tests in X to go to the same worker."

    Solution: You know the drill. If tests belong on the same worker, we are betting that there is an implied, shared dependency. Express that shared dependency as a fixture and pytest will take care of the rest for you.
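
For the module use case above, a minimal sketch (make_expensive_resource is a placeholder, not real setup code): both tests depend on the same module-scoped fixture instance, so under this proposal they would land in the same chunk and hence on the same worker.

import pytest

def make_expensive_resource():
    # stand-in for a genuinely expensive setup (database, external server, ...)
    return object()

@pytest.fixture(scope="module")
def expensive_resource():
    return make_expensive_resource()

def test_one(expensive_resource):
    assert expensive_resource is not None

def test_two(expensive_resource):
    assert expensive_resource is not None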

Counter-proposal to #17.
Addresses pytest-dev/pytest#175.
Adapted from this comment by @hpk42.

Distribution using --dist=each not working

I'm trying to distribute my tests over 2 workers and expect them to run in parallel.
That is, suppose I have 2 tests and I want every worker to execute all of them:
2 tests * 2 workers = 4 test executions in total.
As a sample, I have created 2 tests:


def test_xyz():
    print "xyz"

def test_abc():
    print "abc"

Execution result with --dist

$py.test test_xdist.py -svv -n 2 --dist=each
================================================================== test session starts ===================================================================
platform linux2 -- Python 2.7.10, pytest-2.9.0, py-1.4.31, pluggy-0.3.1 -- /home/gblp090/automation/CFAutomation/venv/bin/python
cachedir: .cache
rootdir: /home/gblp090/automation/CFAutomation, inifile: pytest.ini
plugins: allure-adaptor-1.7.2, variables-1.4, xdist-1.9, rerunfailures-2.0.0, html-1.8.1
[gw0] linux2 Python 2.7.10 cwd: /home/gblp090/automation/CFAutomation
[gw1] linux2 Python 2.7.10 cwd: /home/gblp090/automation/CFAutomation
[gw1] Python 2.7.10 (default, Oct 14 2015, 16:09:02)  -- [GCC 5.2.1 20151010]
[gw0] Python 2.7.10 (default, Oct 14 2015, 16:09:02)  -- [GCC 5.2.1 20151010]
gw0 [2] / gw1 [2]
scheduling tests via LoadScheduling
[gw1] PASSED test_xdist.py::test_xyz 
[gw0] PASSED test_xdist.py::test_abc 

====================================================== 2 passed, 2 pytest-warnings in 0.51 seconds =======================================================

Execution result without --dist

 $py.test test_xdist.py -n 2 -svv            
================================================================== test session starts ===================================================================
platform linux2 -- Python 2.7.10, pytest-2.9.0, py-1.4.31, pluggy-0.3.1 -- /home/gblp090/automation/CFAutomation/venv/bin/python
cachedir: .cache
rootdir: /home/gblp090/automation/CFAutomation, inifile: pytest.ini
plugins: allure-adaptor-1.7.2, variables-1.4, xdist-1.9, rerunfailures-2.0.0, html-1.8.1
[gw0] linux2 Python 2.7.10 cwd: /home/gblp090/automation/CFAutomation
[gw1] linux2 Python 2.7.10 cwd: /home/gblp090/automation/CFAutomation
[gw0] Python 2.7.10 (default, Oct 14 2015, 16:09:02)  -- [GCC 5.2.1 20151010]
[gw1] Python 2.7.10 (default, Oct 14 2015, 16:09:02)  -- [GCC 5.2.1 20151010]
gw0 [2] / gw1 [2]
scheduling tests via LoadScheduling
[gw0] PASSED test_xdist.py::test_xyz 
[gw1] PASSED test_xdist.py::test_abc 

====================================================== 2 passed, 2 pytest-warnings in 0.60 seconds =======================================================

And my requirment.txt is

selenium
fake-factory
ipython
pytest==2.9.0
pytest-html
pytest-xdist
pytest-rerunfailures
Django==1.9.4
GitPython==1.0.2
PyGithub==1.26.0
celery==3.1.21
eventlet==0.18.4
gunicorn==19.4.5
unittestzero
redis==2.10.5
pytest-allure-adaptor==1.7.2

So, what am I missing, or is this a bug?
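
A note that may explain the output above (based on my reading of xdist's option handling, so treat it as an assumption): -n is shorthand for load distribution, and when it is given it forces dist back to "load", which is why both runs report "scheduling tests via LoadScheduling" despite --dist=each. Dropping -n and spelling out the workers explicitly should give each-scheduling:

$ py.test test_xdist.py -svv --dist=each --tx 2*popen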

pytest hangs indefinitely after completing tests in parallel

I run into this behavior when running web tests with Google App Engine. However, since this issue only occurs when using xdist, I do not believe it is caused by App Engine itself.

Run the following two tests in parallel. Tests will complete, but pytest will not finish:

import subprocess
import sys
import time

# GAE_PATH and HELLO_PATH point at the App Engine SDK and the sample app.

def test_REMOVE():
    port = 9999
    admin_port = 9998
    cmd = '"{python}" "{gaepath}/dev_appserver.py" "{apppath}"' \
          ' -A test-{port} --port={port} --admin_port={adminport}' \
          ' --datastore_path=C:/tmp/test_datastore_{port}' \
          ' --clear_datastore=yes'.format(python=sys.executable,
                                          gaepath=GAE_PATH,
                                          apppath=HELLO_PATH,
                                          port=port, adminport=admin_port)
    app_engine = subprocess.Popen(cmd)

    time.sleep(8)

    app_engine.terminate()

def test_REMOVE_2():
    port = 10001
    admin_port = 10000
    cmd = '"{python}" "{gaepath}/dev_appserver.py" "{apppath}"' \
          ' -A test-{port} --port={port} --admin_port={adminport}' \
          ' --datastore_path=C:/tmp/test_datastore_{port}' \
          ' --clear_datastore=yes'.format(python=sys.executable,
                                          gaepath=GAE_PATH,
                                          apppath=HELLO_PATH,
                                          port=port, adminport=admin_port)
    app_engine = subprocess.Popen(cmd)

    time.sleep(8)

    app_engine.terminate()

I ran the tests with the following line:

py.test -k REMOVE -n 2

The result is:

============================= test session starts =============================
platform win32 -- Python 2.7.11, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: ..., inifile:
plugins: cov-2.2.1, xdist-1.14
gw0 [2] / gw1 [2]
scheduling tests via LoadScheduling
..

Killing the process results in the following traceback:

Traceback (most recent call last):
  File "c:\python27\lib\runpy.py", line 162, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "c:\python27\lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File "C:\Python27\Scripts\py.test.exe\__main__.py", line 9, in <module>
  File "c:\python27\lib\site-packages\_pytest\config.py", line 49, in main
    return config.hook.pytest_cmdline_main(config=config)
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "c:\python27\lib\site-packages\_pytest\main.py", line 119, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "c:\python27\lib\site-packages\_pytest\main.py", line 114, in wrap_session
    exitstatus=session.exitstatus)
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 724, in __call__
    return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 338, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 333, in <lambda>
    _MultiCall(methods, kwargs, hook.spec_opts).execute()
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 595, in execute
    return _wrapped_call(hook_impl.function(*args), self.execute)
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 249, in _wrapped_call
    wrap_controller.send(call_outcome)
  File "c:\python27\lib\site-packages\_pytest\terminal.py", line 363, in pytest_sessionfinish
    outcome.get_result()
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 279, in get_result
    _reraise(*ex)  # noqa
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 264, in __init__
    self.result = func()
  File "c:\python27\lib\site-packages\_pytest\vendored_packages\pluggy.py", line 596, in execute
    res = hook_impl.function(*args)
  File "c:\python27\lib\site-packages\xdist\dsession.py", line 517, in pytest_sessionfinish
    nm.teardown_nodes()
  File "c:\python27\lib\site-packages\xdist\slavemanage.py", line 62, in teardown_nodes
    self.group.terminate(self.EXIT_TIMEOUT)
  File "c:\python27\lib\site-packages\execnet\multi.py", line 208, in terminate
    for gw in self._gateways_to_join])
  File "c:\python27\lib\site-packages\execnet\multi.py", line 297, in safe_terminate
    workerpool.waitall()
  File "c:\python27\lib\site-packages\execnet\gateway_base.py", line 325, in waitall
    return my_waitall_event.wait(timeout=timeout)
  File "c:\python27\lib\threading.py", line 614, in wait
    self.__cond.wait(timeout)
  File "c:\python27\lib\threading.py", line 340, in wait
    waiter.acquire()
KeyboardInterrupt

Finally, I am on Windows.

Note that this test is run with the hello-world application provided by Google. Any other App Engine application I've tested causes the same hang.

Also, I have tried a Popen that simply runs python, and this does not result in the same hang.
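
A speculative workaround sketch (not verified against this setup): on Windows, handles inherited by the dev_appserver child can keep the xdist worker's pipes open after the test finishes, and terminate() returns before the process is actually gone. Giving the server its own stdio and waiting for it after terminate() might let execnet's teardown complete:

import os
import subprocess

def start_server(cmd):
    # Don't let dev_appserver inherit the xdist worker's stdout/stderr
    # handles, so the master sees EOF once the worker exits.
    devnull = open(os.devnull, "wb")
    return subprocess.Popen(cmd, stdout=devnull, stderr=devnull)

def stop_server(proc):
    proc.terminate()
    proc.wait()  # make sure no orphaned process keeps handles open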

Overhead of pytest-xdist is often greater than time saved for parallelization

(I'm pretty sure this is a known issue, but I didn't see an open issue for it, and wanted there to be a canonical place to track it)

Example:

(cryptography) ~/p/cryptography (master) $ py.test tests/test_x509*.py
====================================================== test session starts =======================================================
platform darwin -- Python 2.7.10[pypy-4.0.1-final], pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /Users/alex_gaynor/projects/cryptography, inifile: tox.ini
plugins: hypothesis-2.0.0, xdist-1.14
collected 492 items

tests/test_x509.py .........................................................................................................................................................................................
tests/test_x509_crlbuilder.py ............................
tests/test_x509_ext.py .......................................................................................................................................................................................................................................................................
tests/test_x509_revokedcertbuilder.py ................

=================================================== 492 passed in 3.60 seconds ===================================================
(cryptography) ~/p/cryptography (master) $ py.test -n 2 tests/test_x509*.py
====================================================== test session starts =======================================================
platform darwin -- Python 2.7.10[pypy-4.0.1-final], pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /Users/alex_gaynor/projects/cryptography, inifile: tox.ini
plugins: hypothesis-2.0.0, xdist-1.14
gw0 [492] / gw1 [492]
scheduling tests via LoadScheduling
............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
=================================================== 492 passed in 5.00 seconds ===================================================
(cryptography) ~/p/cryptography (master) $ py.test -n 2 tests/test_x509*.py
====================================================== test session starts =======================================================
platform darwin -- Python 2.7.10[pypy-4.0.1-final], pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /Users/alex_gaynor/projects/cryptography, inifile: tox.ini
plugins: hypothesis-2.0.0, xdist-1.14
gw0 [492] / gw1 [492]
scheduling tests via LoadScheduling
............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
=================================================== 492 passed in 4.62 seconds ===================================================
(cryptography) ~/p/cryptography (master) $ py.test -n 4 tests/test_x509*.py
====================================================== test session starts =======================================================
platform darwin -- Python 2.7.10[pypy-4.0.1-final], pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /Users/alex_gaynor/projects/cryptography, inifile: tox.ini
plugins: hypothesis-2.0.0, xdist-1.14
gw0 [492] / gw1 [492] / gw2 [492] / gw3 [492]
scheduling tests via LoadScheduling
............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
=================================================== 492 passed in 6.23 seconds ===================================================
(cryptography) ~/p/cryptography (master) $ py.test -n 4 tests/test_x509*.py
====================================================== test session starts =======================================================
platform darwin -- Python 2.7.10[pypy-4.0.1-final], pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /Users/alex_gaynor/projects/cryptography, inifile: tox.ini
plugins: hypothesis-2.0.0, xdist-1.14
gw0 [492] / gw1 [492] / gw2 [492] / gw3 [492]
scheduling tests via LoadScheduling
............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
=================================================== 492 passed in 6.73 seconds ===================================================
(cryptography) ~/p/cryptography (master) $ py.test -n 4 tests/test_x509*.py
====================================================== test session starts =======================================================
platform darwin -- Python 2.7.10[pypy-4.0.1-final], pytest-2.8.7, py-1.4.31, pluggy-0.3.1
rootdir: /Users/alex_gaynor/projects/cryptography, inifile: tox.ini
plugins: hypothesis-2.0.0, xdist-1.14
gw0 [492] / gw1 [492] / gw2 [492] / gw3 [492]
scheduling tests via LoadScheduling
............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
=================================================== 492 passed in 6.53 seconds ===================================================

When run with the -n option: AttributeError: 'Config' object has no attribute '_tmpdirhandler'

$ py.test -n auto common/tests/test_sys.py
================================================= test session starts ==================================================
platform darwin -- Python 2.7.9, pytest-2.8.3, py-1.4.31, pluggy-0.3.1
django settings: proj.settings.local (from environment variable)
rootdir: /Users/khj/works/projrepo/proj, inifile: pytest.ini
plugins: cov-2.1.0, django-2.9.1, xdist-1.13.1
gw0 C / gw1 I / gw2 I / gw3 I / gw4 I / gw5 I / gw6 I / gw7 IINTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/main.py", line 88, in wrap_session
INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 724, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 338, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 333, in <lambda>
INTERNALERROR>     _MultiCall(methods, kwargs, hook.spec_opts).execute()
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/_pytest/vendored_packages/pluggy.py", line 596, in execute
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/xdist/dsession.py", line 494, in pytest_sessionstart
INTERNALERROR>     nodes = self.nodemanager.setup_nodes(putevent=self.queue.put)
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/xdist/slavemanage.py", line 45, in setup_nodes
INTERNALERROR>     nodes.append(self.setup_node(spec, putevent))
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/xdist/slavemanage.py", line 54, in setup_node
INTERNALERROR>     node.setup()
INTERNALERROR>   File "/Users/khj/.pyenv/versions/proj/lib/python2.7/site-packages/xdist/slavemanage.py", line 216, in setup
INTERNALERROR>     basetemp = self.config._tmpdirhandler.getbasetemp()
INTERNALERROR> AttributeError: 'Config' object has no attribute '_tmpdirhandler'
  • Without the -n option, it runs fine.

Collection fails with non-deterministic (e.g. random) parametrised tests

I have a test that looks something like this:

import random

import pytest

@pytest.mark.parametrize('sequence', [
    [random.choice(my_list) for _ in range(n)]  # my_list and n are defined elsewhere
])
def test_foo(sequence):
    pass  # Test implementation is irrelevant here

This fails at collection time with an error like:

collecting 0 items / 1 errors
============================================================ ERRORS =============================================================
_____________________________________________________ ERROR collecting gw1 ______________________________________________________
Different tests were collected between gw0 and gw1. The difference is:
--- gw0

+++ gw1

@@ -213,7 +213,7 @@

-test_foo.py::test_foo[[a, b, c]]
+test_foo.py::test_foo[[b, c, a]]

(The diff has some context and I've omitted some details specific to my environment, but hopefully it's clear.)

So it looks like xdist cares that each worker gets exactly the same list of tests in the same order: why is this? With a randomised test, I don't much care if I run with one random set of parameters or another: they're randomised on purpose. (I've also come across this issue where I generate parameters into an unordered collection like a set.)

Other than using a fixed seed for the RNG (e.g. random.seed(0)), which defeats the purpose of the randomisation, is there a workaround for this? Or can something be done inside xdist to permit this sort of parametrised test?
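
One workaround sketch (it relies on an assumption about environment inheritance, and TEST_RANDOM_SEED is a made-up variable, not an xdist feature): draw a fresh seed per run, but share it between the master and the workers through the environment by exporting it before invoking py.test. Every process then generates the same "random" parameters, so collection stays consistent without hard-coding a seed:

# conftest.py -- assumes TEST_RANDOM_SEED is exported before the run,
# e.g.: TEST_RANDOM_SEED=$RANDOM py.test -n 2
import os
import random

random.seed(os.environ.get("TEST_RANDOM_SEED", "0"))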
