Comments (4)
If we are to implement this, I propose we keep it simple, as it would be possible to also write `my_device_under_test(with_serial=1234)` or `my_device_under_test(with_serial=5678)`.
from pytest.
I can see the appeal of this, and I agree in theory it makes sense to keep things close to a Python-like syntax.
However, at the same time, I'm also worried this might be a bottomless pit. Where do we stop in terms of the types of arguments we support? Allowing `my_device_under_test(with_serial=1234)` or `use_case("ABC-1234")` seems obvious (and would indeed cover things I've seen various companies implement custom arguments/filtering via plugin hooks for). However, what about, say, enums? Surely there's an argument to be made for enums, as in `size(Size.end2end)`. But then where is `Size` coming from? I'm playing devil's advocate here, but if we allow enums, why wouldn't we also allow, I don't know, custom dataclass instances (in the sense of `device(Multimeter(...))`)? Etc. etc., and then at some point we're back to essentially reimplementing the `eval()` solution we had at some point.

Maybe, though, we could limit this to `int` / `str` / `bool` / `None` or somesuch, or perhaps to what `ast.literal_eval` supports. But still, while I can absolutely see the need and usefulness, it seems very tempting to take this too far over time.
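If the allowed values were limited to what `ast.literal_eval` accepts, validating an argument value could look roughly like this. This is a sketch only; the `parse_marker_arg` helper and its restriction to `int` / `str` / `bool` / `None` are my own illustration, not an existing pytest API:

```python
import ast

def parse_marker_arg(text: str):
    """Parse one marker-argument value, accepting only simple literals
    (int / str / bool / None) and rejecting enums, dataclasses, etc."""
    try:
        value = ast.literal_eval(text)
    except (ValueError, SyntaxError):
        # Names like Size.end2end are not literals, so they fail here --
        # exactly the cutoff being discussed above.
        raise ValueError(f"not a plain literal: {text!r}")
    if value is not None and not isinstance(value, (int, str, bool)):
        raise ValueError(f"unsupported value type: {type(value).__name__}")
    return value
```

With this cutoff, `parse_marker_arg("1234")` and `parse_marker_arg("'ABC-1234'")` succeed, while `parse_marker_arg("Size.end2end")` raises, so the enum question never arises.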
Running into this thought myself, I was wondering if we could instead consider a bit of a tangent to avoid the pitfalls mentioned by @The-Compiler. How about introducing a new interface called `pytest.mark.annotated(*args: str)`?

It would look like:

```python
@pytest.mark.annotated("annotation1", "annotation2", ...)
def test_my_function(...):
    pass
```

Selection could use a slightly specialized syntax:

`pytest -m [annotation1]`

which can be combined with other non-annotated markers if needed:

`pytest -m "awesome and [annotation1] and [annotation2]"`
This is surely limiting, but maybe that's not a bad thing. For my use case, this would help with adding bug number/jira ticket annotations to test cases.
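Hypothetically, the bracketed syntax could be handled by lowering each `[tag]` token to a boolean before the rest of the `-m` expression is evaluated by the existing machinery. A minimal sketch, where the `expand_annotations` helper and its regex are my own illustration rather than anything pytest provides:

```python
import re

_ANNOTATION = re.compile(r"\[([^\]]+)\]")

def expand_annotations(expr: str, present: set) -> str:
    """Lower each [tag] token to True/False based on the annotations
    attached to one test item, leaving plain marker idents untouched."""
    return _ANNOTATION.sub(lambda m: str(m.group(1) in present), expr)
```

For a test carrying only `"annotation1"`, the expression `"awesome and [annotation1] and [annotation2]"` would lower to `"awesome and True and False"`, which the existing marker-expression evaluator could then handle unchanged.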
The feature request makes sense to me; I think it would be great to allow matching on mark arguments (there are two kinds, positional and keyword arguments), and something like the function-call syntax seems natural for this.

For the match-expression grammar, it shouldn't pose a big challenge to add. We would need to decide whether the argument values (the `1234` in your example) can be just idents, which is simple, or something richer, e.g. supporting `my_device_under_test(with_serial=1234 or 5678)`, which would be more complex.

The bigger implementation issue would be the evaluation of the expression. Currently, to optimize the evaluation, we compile the expression to a Python AST, then `eval` the AST with an environment ("locals") where every ident resolves to either True (if it matches) or False. This would need to be extended to support the function-call syntax. As a quick thought, we could replace the `locals` hack with injecting some `$match` symbol into the `eval` environment, which has the form `matcher(name, *args: str, **kwargs: str)` (assuming we only allow idents).
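As a rough sketch of that idea (the names `_CallRewriter`, `__match__`, and the `marks` dict are mine, not pytest internals): rewrite each call node in the parsed expression into a call to an injected matcher, keep bare idents resolving to True/False through the locals mapping, and `eval` the result.

```python
import ast

class _CallRewriter(ast.NodeTransformer):
    """Rewrite name(arg, kw=val) into __match__("name", "arg", kw="val"),
    passing argument values along as strings (idents only)."""
    def visit_Call(self, node):
        args = [ast.Constant(node.func.id)]
        args += [ast.Constant(ast.unparse(a)) for a in node.args]
        kwargs = [ast.keyword(kw.arg, ast.Constant(ast.unparse(kw.value)))
                  for kw in node.keywords]
        return ast.Call(ast.Name("__match__", ast.Load()), args, kwargs)

class _Idents(dict):
    """Locals mapping: bare idents resolve to whether a mark of that
    name is present on the item (the current behaviour)."""
    def __init__(self, marks, env):
        super().__init__(env)
        self._marks = marks
    def __missing__(self, name):
        return name in self._marks

def evaluate(expr: str, marks: dict) -> bool:
    """marks maps mark name -> (args tuple, kwargs dict), values as strings."""
    tree = _CallRewriter().visit(ast.parse(expr, mode="eval"))
    ast.fix_missing_locations(tree)

    def __match__(name, *args, **kwargs):
        if name not in marks:
            return False
        margs, mkwargs = marks[name]
        return (set(args) <= set(margs)
                and all(mkwargs.get(k) == v for k, v in kwargs.items()))

    env = _Idents(marks, {"__match__": __match__})
    return bool(eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}}, env))
```

Because the locals argument is a dict subclass, name lookup falls through to `__missing__`, so plain marks keep working while call nodes route through the injected matcher; the real design would of course need to handle error reporting and the full grammar.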