
python-pytest-steps's Introduction

pytest-steps

Create step-wise / incremental tests in pytest.


This is the readme for developers. The documentation for users is available here: https://smarie.github.io/python-pytest-steps/

Want to contribute?

Contributions are welcome! Simply fork this project on GitHub, commit your contributions, and create pull requests.

Here is a non-exhaustive list of interesting open topics: https://github.com/smarie/python-pytest-steps/issues

nox setup

This project uses nox to define all lifecycle tasks. In order to be able to run those tasks, you should create a python 3.7 environment and install the requirements:

>>> conda create -n noxenv python="3.7"
>>> activate noxenv
(noxenv) >>> pip install -r noxfile-requirements.txt

You should then be able to list all available tasks using:

>>> nox --list
Sessions defined in <path>\noxfile.py:

* tests-2.7 -> Run the test suite, including test reports generation and coverage reports.
* tests-3.5 -> Run the test suite, including test reports generation and coverage reports.
* tests-3.6 -> Run the test suite, including test reports generation and coverage reports.
* tests-3.8 -> Run the test suite, including test reports generation and coverage reports.
* tests-3.7 -> Run the test suite, including test reports generation and coverage reports.
- docs-3.7 -> Generates the doc and serves it on a local http server. Pass '-- build' to build statically instead.
- publish-3.7 -> Deploy the docs+reports on github pages. Note: this rebuilds the docs
- release-3.7 -> Create a release on github corresponding to the latest tag

Running the tests and generating the reports

This project uses pytest, so running pytest at the root folder will execute all tests on the current environment. However, it is a bit cumbersome to manage all requirements by hand; it is easier to use nox to run pytest on all supported python environments with the correct package requirements:

nox

Tests and coverage reports are automatically generated under ./docs/reports for one of the sessions (tests-3.7).

If you wish to execute tests on a specific environment, use explicit session names, e.g. nox -s tests-3.6.

Editing the documentation

This project uses mkdocs to generate its documentation page. Therefore building a local copy of the doc page may be done using mkdocs build -f docs/mkdocs.yml. However, once again things are easier with nox. You can easily build and serve a local version of the documentation site using:

>>> nox -s docs
nox > Running session docs-3.7
nox > Creating conda env in .nox\docs-3-7 with python=3.7
nox > [docs] Installing requirements with pip: ['mkdocs-material', 'mkdocs', 'pymdown-extensions', 'pygments']
nox > python -m pip install mkdocs-material mkdocs pymdown-extensions pygments
nox > mkdocs serve -f ./docs/mkdocs.yml
INFO    -  Building documentation...
INFO    -  Cleaning site directory
INFO    -  The following pages exist in the docs directory, but are not included in the "nav" configuration:
  - long_description.md
INFO    -  Documentation built in 1.07 seconds
INFO    -  Serving on http://127.0.0.1:8000
INFO    -  Start watching changes
...

While this is running, you can edit the files under ./docs/ and browse the automatically refreshed documentation at the local http://127.0.0.1:8000 page.

Once you are done, simply hit <CTRL+C> to stop the session.

Publishing the documentation (including tests and coverage reports) is done automatically by the continuous integration engine, using the nox -s publish session; this is not needed for local development.

Packaging

This project uses setuptools_scm to synchronise the version number. Therefore the following command should be used for development snapshots as well as official releases: python setup.py sdist bdist_wheel. However this is not generally needed since the continuous integration engine does it automatically for us on git tags. For reference, this is done in the nox -s release session.

Merging pull requests with edits - memo

As explained on GitHub ('get command line instructions'):

git checkout -b <git_name>-<feature_branch> main
git pull https://github.com/<git_name>/python-pytest-steps.git <feature_branch> --no-commit --ff-only

If the second step does not work, do a normal auto-merge (do not use rebase!):

git pull https://github.com/<git_name>/python-pytest-steps.git <feature_branch> --no-commit

Finally review the changes, possibly perform some modifications, and commit.

python-pytest-steps's People

Contributors

j-carson, keszybz, obestwalter, r4lv, smarie


python-pytest-steps's Issues

Travis CI gets stuck on forked repo

Dear Sylvain,

I forked python-pytest-steps and started a Travis CI build. However, the CI gets stuck at the after_success step, waiting for:

Enter passphrase for ci_tools/github_travis_rsa: 

(please have a look at the full logs on https://travis-ci.com/r4lv/python-pytest-steps)

Maybe the deployment step could be deactivated for forked repos by testing the GitHub URL in the .travis.yml?

pytest-harvest + pytest-steps: results_bag fixture is cross-step, how to get per-step?

I have just started using your pytest extensions - they are wonderful, thank you!

I had a pytest-steps test with four steps, each of which saved the current value of a variable at its end. The results_bag only had the value of the variable from the last step, and that value appeared in the row for the first step.

I fixed it by creating a per-step results bag fixture and using that in place of results_bag as the argument to my test. Is this the best way to get my desired behavior?

import pytest
from pytest_steps import one_fixture_per_step


@pytest.fixture
@one_fixture_per_step
def step_bag(request):
    """pytest-harvest creates a results_bag per step so that it
    can calculate duration_ms per step, but results_bag seems
    to not be updated per step in my tests. This fixture
    explicitly grabs the current step's bag.

    Parameters
    ----------
    request: current running test object
        Fixture provided by pytest
    """
    return request.getfixturevalue("results_bag")
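For reference, a minimal sketch of how such a per-step bag would be used inside a stepped test, assuming step_bag behaves like the usual pytest-harvest results_bag (attribute assignment stores a value):

from pytest_steps import test_steps

@test_steps('step_1', 'step_2')
def test_with_per_step_bag(step_bag):
    # stored in the bag attached to step_1
    step_bag.intermediate_value = 1
    yield

    # stored in the bag attached to step_2
    step_bag.intermediate_value = 2
    yield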

Use of deprecated getfuncargvalue

Here is the warning: steps_parametrizer.py:94: RemovedInPytest4Warning: getfuncargvalue is deprecated, use getfixturevalue

Is it planned to be fixed?
Thanks

Python 3.8 incompatibility: DeprecationWarning: collections.abc

Python 3.7 is showing the following

pytest_steps/steps_generator.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Iterable as It

I guess that means it is broken on py38
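The usual fix for this class of warning (a sketch of the standard compatibility pattern, not necessarily the exact change made in pytest-steps) is to import from collections.abc first and fall back to collections for old interpreters:

try:
    from collections.abc import Iterable  # Python 3.3+
except ImportError:  # pragma: no cover
    from collections import Iterable  # Python 2 fallback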

Be able to call a decorated test function manually

In benchmarking situations we do not want the first run of a test to perform all the indirect imports, otherwise its duration is overly long.

Therefore there is a need to run the test function "by hand" once. This is quite difficult right now when the function is decorated with @test_steps...

order inverted

[screenshot: a test report in which the step names appear in reverse order]
As can be seen from the image, I am not sure which plugin it is not getting along with, but the step names are in the wrong order.

Improve packaging

  • py.typed
  • setup.cfg universal_wheel
  • setup.py remove six dependency
  • setup.py zip_safe=False
  • setup.py remove tests folder from package.

`StepYieldError` when step ids are present in other parameter ids

import pytest
from pytest_steps import test_steps

@test_steps('a', 'b')
@pytest.mark.parametrize('dummy_param', ['a', 'b'], ids="p={}".format)
def test_step_id_conflicts(dummy_param):
    yield 'a'
    yield 'b'

yields:

StepYieldError: Error collecting results from step 'b': received 'a' from the `yield` statement, which is different from the current step or step name. Please either use `yield`, `yield 'b'` or wrap your step with `with optional_step(...) as my_step:` and use `yield my_step`

The issue happens because we generate a unique id for each combination of parameters except the step. This unique id is currently based on the string id, and a conflict happens because the string id of the step parameter appears inside the string id of the other parameter.
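Until this is fixed, one possible workaround (my suggestion, not a confirmed fix) is to pick parameter ids that do not contain any of the step names:

import pytest
from pytest_steps import test_steps

@test_steps('a', 'b')
@pytest.mark.parametrize('dummy_param', ['a', 'b'], ids=['first', 'second'])
def test_step_id_no_conflict(dummy_param):
    yield 'a'
    yield 'b'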

pandas deprecation warning in steps_pytest_harvest_utils.py

At line 86 of steps_pytest_harvest_utils.py, the columns have a single level index on the left and a two level index on the right. This is causing a pandas deprecation warning.

Test case: insert the following into tests/test_steps_harvest.py at line 64 and run the library test suite.

import warnings
warnings.warn("error")

You could perhaps fix the warning with the flatten_multilevel_columns function, but the column name change might affect existing tests.

Pytest marks on generator test step?

Hello, when using generator steps, is it possible to somehow mark a specific step, for example with @pytest.mark.xfail, skip or other marks? Thank you in advance.
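As far as I know there is no direct way to attach a pytest mark to a single generator step, but the optional_step context manager documented by pytest-steps gives xfail-like behaviour for an individual step; a minimal sketch based on the documented usage (some_flaky_condition is a hypothetical check):

from pytest_steps import test_steps, optional_step

@test_steps('step_a', 'step_b')
def test_with_optional_step():
    # Step A: a regular, mandatory step
    assert True
    yield

    # Step B: allowed to fail without failing the whole test
    with optional_step('step_b') as step_b:
        assert some_flaky_condition()  # hypothetical check
    yield step_b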

TypeError: test_suite() missing 2 required positional arguments: '________step_name_' and 'request'

from pytest_steps import test_steps
from seleniumbase import BaseCase

class MMMM(BaseCase):

    @test_steps('step_a', 'step_b', 'step_c')
    def test_suite(self):
        # Step A
        print("step a")
        assert not False  # replace with your logic
        intermediate_a = 'hello'
        yield

        # Step B
        print("step b")
        assert not False  # replace with your logic
        yield

        # Step C
        print("step c")
        new_text = intermediate_a + " ... augmented"
        print(new_text)
        assert len(new_text) == 56
        yield
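For comparison, here is the same sequence written as a plain module-level test function, the style shown in the pytest-steps documentation (a sketch; whether generator-mode steps can work inside a unittest/seleniumbase class at all is a separate question):

from pytest_steps import test_steps

@test_steps('step_a', 'step_b', 'step_c')
def test_suite():
    # Step A
    intermediate_a = 'hello'
    yield

    # Step B
    assert not False
    yield

    # Step C
    new_text = intermediate_a + " ... augmented"
    assert len(new_text) > 0
    yield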

marking pytest.fixture as a step in the final report

Hey, first of all - big fan of your plugin.
I'm trying to use it on top of a parametrized test; however, all combinations use a common setup, which I naturally try to move into a pytest fixture so it is performed just once. However, this way I can't mark those actions as a step, so in the final report nobody sees they happened.

What I'd like to achieve is to end up with the following report (given test_e2e with params ['param1', 'param2'], steps ['step1', 'step2'] and a fixture fixture1):

collected 5 items                                                                            

test.py::test_e2e[fixture1]     PASSED                                           [ 20%]
test.py::test_e2e[step1-param1] PASSED                                           [ 40%]
test.py::test_e2e[step1-param2] PASSED                                           [ 60%]
test.py::test_e2e[step2-param1] PASSED                                           [ 80%]
test.py::test_e2e[step2-param2] PASSED                                           [100%]

=============================== 4 passed in 0.02 seconds ===============================

Do you think this is achievable in any way? (I don't insist on using fixtures, however I don't want to generate that step for every parameter.)

`@cross_steps_fixture` does not wait for the last step to perform teardown

From reading the code I realize that there is nothing preventing pytest from executing the fixture teardown after the first step. So even if you decorate the fixture with @cross_steps_fixture, the value will be reused but it might be dysfunctional.

The teardown hook should therefore be executed after all steps.

Challenges:

  • there are several teardown hooks for fixtures as of today. We have to make sure that we capture all of them and replace them with dummy ones - and finally call them after the last step
  • we have to know the name of the last step also...

1.8.0: pytest is failing

I'm trying to package your module as an rpm package, so I'm using the typical PEP 517 based build, install and test cycle used when building packages from a non-root account.

  • python3 -sBm build -w
  • install .whl file in </install/prefix>
  • run pytest with PYTHONPATH pointing to sitearch and sitelib inside </install/prefix>

First just run pytest:

+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-steps-1.8.0-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-steps-1.8.0-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/tkloczko/rpmbuild/BUILD/python-pytest-steps-1.8.0, configfile: setup.cfg, testpaths: pytest_steps/tests/
plugins: steps-1.8.0
collected 0 items / 1 error

================================================================================== ERRORS ==================================================================================
______________________________________________________________________ ERROR collecting test session _______________________________________________________________________
/usr/lib64/python3.8/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
    ???
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:961: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
    ???
<frozen importlib._bootstrap>:1014: in _gcd_import
    ???
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:973: in _find_and_load_unlocked
    ???
E   ModuleNotFoundError: No module named 'pytest_steps.tests'
========================================================================= short test summary info ==========================================================================
ERROR  - ModuleNotFoundError: No module named 'pytest_steps.tests'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================= 1 error in 0.22s =============================================================================

Because pytest_steps.tests is not installed I've been trying to preload it using --pyargs:

+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-steps-1.8.0-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-steps-1.8.0-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra --pyargs pytest_steps.tests
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /home/tkloczko/rpmbuild/BUILD/python-pytest-steps-1.8.0, configfile: setup.cfg
plugins: steps-1.8.0
collected 0 items

========================================================================== no tests ran in 0.02s ===========================================================================
ERROR: module or package not found: pytest_steps.tests (missing __init__.py?)

Segfault in travis when executing test inside meta tester

This happened today: log

pytest_steps/tests/test_all.py::test_run_all_tests[test_docs_example_with_harvest.py] Segmentation fault (core dumped)

From the various configurations I tried, my best guess is that the meta-testing plugin (pytester) is causing the segfault when too many tests are run, especially inside "big" conda environments with pandas, etc.

Workaround: keep as many tests as possible out of the meta-tests.

Received StopIteration if yield not present - but what should I do inside an exception block?

I have some code:

# some imports (fixed_set_precision_str and get_total_decimals
# are my own helpers, not shown here)
import pytest
from pytest_steps import test_steps

# some inputs
@pytest.mark.parametrize(
    "real_number, precision, expected_result",
    [
        (123.123, 2, "123.12"),
        ("xasd123b.x", 2, "123.123"),
        ("x.x", 2, "123.123"),
    ]
)
# some steps from your package (great btw, I love it)
@test_steps(
    "correct number of decimals ?",
    "is type str ?",
    "result == expected ?"
)
# some params for my custom function
def test_fixed_set_precision_str(
    real_number: float | str,
    precision: int,
    expected_result: float,
):
    try:
        # we know that this raises some errors
        # when @real_number is not valid
        result = fixed_set_precision_str(real_number, precision)
        print(result)

    except TypeError as error:
        # here is the problem with pytest-steps:
        # there is no yield here, so pytest raises an error
        # because it receives StopIteration
        print(error)

    except ValueError as error:
        # same thing:
        # there is no yield here, so pytest raises an error
        # because it receives StopIteration
        print(error)
    else:
        _decimals = get_total_decimals(result)

        # step 1
        # correct number of decimals ?
        assert _decimals == precision
        yield

        # step 2
        # is type str ?
        assert isinstance(result, str)
        yield

        # step 3
        # result == expected ?
        assert result == expected_result
        yield

In the pytest output I get this error:

pytest_steps.steps_generator.StepExecutionError: Error executing step 'correct number of decimals ?': could not reach the next `yield` statement (received `StopIteration`). This may be caused by use of a `return` statement instead of a `yield`, or by a missing `yield`

I understand why, but I don't know how to fix this, because test_steps always expects a generator.

Thanks in advance.
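One possible way to restructure this (my suggestion, not an official pytest-steps recipe) is to make the conversion itself the first step, so that an invalid input fails that step instead of leaving the generator without any yield to reach. If the invalid inputs are expected to raise, they are probably better covered by a separate test using pytest.raises. A sketch, reusing the helper names from the snippet above:

import pytest
from pytest_steps import test_steps

@pytest.mark.parametrize(
    "real_number, precision, expected_result",
    [(123.123, 2, "123.12")],
)
@test_steps(
    "conversion succeeds ?",
    "correct number of decimals ?",
    "is type str ?",
    "result == expected ?",
)
def test_fixed_set_precision_str_sketch(real_number, precision, expected_result):
    # step 0: if fixed_set_precision_str raises, this step fails and the
    # remaining steps are reported by pytest-steps accordingly
    result = fixed_set_precision_str(real_number, precision)
    yield

    # step 1: correct number of decimals ?
    assert get_total_decimals(result) == precision
    yield

    # step 2: is type str ?
    assert isinstance(result, str)
    yield

    # step 3: result == expected ?
    assert result == expected_result
    yield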

Plugin fails to be detected when pytest-harvest is not present

  File "/home/travis/miniconda/envs/test-environment/lib/python2.7/site-packages/pytest_steps/plugin.py", line 3, in <module>
    from pytest_steps import pivot_steps_on_df, handle_steps_in_results_df
ImportError: cannot import name pivot_steps_on_df
The command "python -m pytest --version" failed and exited with 1 during .

Tool to declare a fixture as "cross-steps" explicitly

As of today

  • In generator mode all fixtures are "cross-steps"; however, the fixture function is still called several times, once for each step. This is useless, because the corresponding values are just lost.
  • In parametrizer mode all fixtures are "one per step", and there is no way to declare a fixture as cross-steps (a sketch of what such a declaration could look like is shown after this list).
  • When pytest-harvest is used, the pivot should take into account whether a fixture is in one mode or the other. As of today it has no way to know.
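For the parametrizer mode, the @cross_steps_fixture decorator mentioned in the teardown issue above hints at what such a declaration looks like; a minimal sketch, assuming the decorator order shown in the pytest-steps documentation:

import pytest
from pytest_steps import cross_steps_fixture

@pytest.fixture
@cross_steps_fixture
def shared_resource():
    # created for the first step and reused (not re-created) by the
    # following steps of the same test
    return {"connection": object()}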
