cfmtech / pytest-monitor
Pytest plugin for analyzing resource usage during test sessions
License: MIT License
Describe the bug
When we add --no-monitor to pytest.ini via addopts = --no-monitor, then pytest.raises, django_assert_num_queries and django_assert_max_num_queries (the latter two are provided by pytest-django) no longer work: they should be raising test failures, but do not.
To Reproduce
Steps to reproduce the behavior:
0. (install django, pytest, pytest-monitor and pytest-django)
1. Run the tests below with --no-monitor in addopts => all tests pass (this is the bug)
2. Run them without --no-monitor => all tests fail (as they should)

import pytest

from books.models import Book


def test_raise_exception():
    with pytest.raises(Exception):
        x = 1 / 1


def test_query_assertion(django_assert_num_queries, db):
    with django_assert_num_queries(0):
        print(Book.objects.all())


def test_max_queries_assertion(django_assert_max_num_queries, db):
    with django_assert_max_num_queries(0):
        print(Book.objects.all())
Expected behavior
The tests should fail, even though the --no-monitor flag was provided.
Desktop (please complete the following information):
Additional context
Please ask if I have missed providing any necessary information!
The following snippet causes pytest-monitor to wait endlessly:
import pytest


def test_that():
    pytest.skip('This test is skipped')
Describe the bug
Got INTERNALERROR> AttributeError: 'ModuleWrapper' object has no attribute 'cpu_freq' when starting pytest.
I guess psutil has different methods in different versions, so the plugin just needs to pin the minimal psutil version it supports:
psutil.cpu_freq() was added in 5.1.0.
To Reproduce
1. Install pytest-monitor: pip3 install git+https://github.com/CFMTech/pytest-monitor.git
2. Run python3 -m pytest -lvv tests/backend/unit/
Expected behavior
Tests run with the pytest-monitor plugin enabled.
Exception
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "python3.7/site-packages/_pytest/main.py", line 105, in wrap_session
INTERNALERROR> config.hook.pytest_sessionstart(session=session)
INTERNALERROR> File "/python3.7/site-packages/pluggy/__init__.py", line 617, in __call__
INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR> File "/python3.7/site-packages/pluggy/__init__.py", line 222, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/python3.7/site-packages/pluggy/__init__.py", line 216, in <lambda>
INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR> File "/python3.7/site-packages/pluggy/callers.py", line 201, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/python3.7/site-packages/pluggy/callers.py", line 76, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/python3.7/site-packages/pluggy/callers.py", line 175, in _multicall
INTERNALERROR> next(gen) # first yield
INTERNALERROR> File "/python3.7/site-packages/pytest_monitor/pytest_monitor.py", line 174, in pytest_sessionstart
INTERNALERROR> session.config.option.mtr_tags)
INTERNALERROR> File "/python3.7/site-packages/pytest_monitor/session.py", line 80, in compute_info
INTERNALERROR> self.set_environment_info(ExecutionContext())
INTERNALERROR> File "/python3.7/site-packages/pytest_monitor/sys_utils.py", line 39, in __init__
INTERNALERROR> self.__cpu_freq_base = psutil.cpu_freq().current
INTERNALERROR> AttributeError: 'ModuleWrapper' object has no attribute 'cpu_freq'
Desktop (please complete the following information):
Additional context
psutil 2.1.1
Describe the bug
When running on a system whose architecture is such that psutil.cpu_freq() returns None, pytest-monitor crashes pytest on startup:
AttributeError: 'NoneType' object has no attribute 'current'
To Reproduce
An example of an architecture with this problem is 5.10.104-linuxkit, the version used by the Docker image python:3.9-slim-buster.
As of the current version 5.9.0, psutil tries to find the CPU frequency in /sys/devices/system/cpu/cpufreq/policy0 or /sys/devices/system/cpu/cpu0/cpufreq; if it can't find it there, it checks /proc/cpuinfo for lines starting with cpu mhz. But that doesn't always work (in my case the CPU speed was given only in BogoMIPS). In that case it returns None, triggering this crash.
Expected behavior
pytest-monitor should fail gracefully in this case, or perhaps default the CPU frequency to zero and emit a warning.
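A defensive reading of the CPU frequency could cover both failure modes reported against pytest-monitor (a psutil too old to have cpu_freq(), and cpu_freq() returning None or raising NotImplementedError). A minimal sketch, not pytest-monitor's actual code; the helper name and the zero default are illustrative:

```python
def safe_cpu_freq_mhz(default=0.0):
    """Return the current CPU frequency in MHz, or `default` when psutil
    cannot determine it (psutil < 5.1.0, unsupported architecture, ...)."""
    try:
        import psutil
        # cpu_freq() may raise NotImplementedError on some Linux setups
        freq = psutil.cpu_freq()
    except (ImportError, AttributeError, NotImplementedError):
        return default
    if freq is None:  # some platforms return None instead of raising
        return default
    return float(freq.current)
```

With such a guard, the worst case is a zero frequency in the session's execution context instead of an INTERNALERROR at startup.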
Desktop (please complete the following information):
Is your feature request related to a problem? Please describe.
To ease maintenance and for code clarity, we should use HTTP codes from the http.HTTPStatus enumeration when sending results to a monitor server.
Describe the solution you'd like
Use http.HTTPStatus enumeration
Describe alternatives you've considered
Hardcoding HTTP codes is not a good solution, as it makes the implementation harder to understand.
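For illustration, the stdlib enumeration already pairs each numeric code with a name and a reason phrase; a small sketch (the `describe` helper is hypothetical, not part of the plugin):

```python
from http import HTTPStatus

def describe(code: int) -> str:
    """Translate a raw HTTP status code into its standard reason phrase."""
    try:
        return HTTPStatus(code).phrase
    except ValueError:
        return f"unknown status {code}"

# HTTPStatus members compare equal to plain integers, so existing
# hardcoded comparisons keep working during a migration:
assert HTTPStatus.CREATED == 201
```

Because HTTPStatus is an IntEnum, replacing magic numbers with named members is a purely readability change.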
When reading dataframes with test information, it would be very helpful to have a column indicating whether a test succeeded or failed.
This should be done for each test, and consequently for each component, in order to bring clarity to data analysis.
We have a project that starts a few child processes that we run tests against.
Are there any plans to track child processes statistics?
Describe the solution you'd like
Include child processes stats on metrics.
Describe alternatives you've considered
Hack our own stats/metrics collector.
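As a sketch of what such a collector could look like, assuming psutil is available (the `process_tree_rss` helper is our own illustration, not pytest-monitor's API):

```python
import os

import psutil

def process_tree_rss(pid=None):
    """Sum resident memory (RSS) of a process and all of its children."""
    proc = psutil.Process(pid if pid is not None else os.getpid())
    total = proc.memory_info().rss
    for child in proc.children(recursive=True):
        try:
            total += child.memory_info().rss
        except psutil.NoSuchProcess:
            pass  # the child exited between enumeration and measurement
    return total
```

The same `children(recursive=True)` walk can be applied to `cpu_times()` to aggregate CPU statistics over the whole tree.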
We have the --description parameter, which helps us provide a meaningful description of a run.
It would help to turn this field into a JSON string to ease post-run analysis and filtering.
Example:
pytest --description "my description" --tag pandas=1.0.1 --tag pipeline_branch=master
Such a command line would produce the following JSON string:
{"description": "my description", "pandas": "1.0.1", "pipeline_branch": "master"}
This can clearly help in filtering out sessions using some keys/values of the JSON string.
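The merging described above could be sketched as follows (`build_run_description` is an illustrative helper, not the plugin's implementation):

```python
import json

def build_run_description(description, tags):
    """Fold a free-text --description and key=value --tag pairs into
    a single JSON string suitable for post-run filtering."""
    doc = {"description": description}
    for tag in tags:
        key, _, value = tag.partition("=")
        doc[key] = value
    return json.dumps(doc)
```

For the example command line above, `build_run_description("my description", ["pandas=1.0.1", "pipeline_branch=master"])` yields exactly the proposed JSON string.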
Describe the bug
I use captured print statements to build test reports. When pytest-monitor is in use, I notice that these statements are repeated.
To Reproduce
Steps to reproduce the behavior:
1. Run pytest with the -s flag to allow printing to the console; observe multiple copies of each statement.
2. Run pytest with -p no:monitor; observe a single statement.
Expected behavior
The printing of a single statement.
Desktop (please complete the following information):
Kernel times and user times are taken using an absolute approach, which is harder to exploit when analyzing trends.
The overhead is minimal as the Process object can be part of the session.
Object: gathering metrics from CI should be automatic
Motivation:
Currently, if we want to collect build information (pipeline, build number, eventually URL), this must be done by hand. It would be easier if pytest-monitor could handle the following CIs automagically:
Describe the bug
A local database is always created, even with the --no-db option.
To Reproduce
Steps to reproduce the behavior:
pytest --no-db
Expected behavior
No local database is created
Is your feature request related to a problem? Please describe.
I'd like to save the information that is provided by the Bitbucket CI to the database of test runs.
Describe the solution you'd like
pytest-monitor should automatically detect the branch name and build number provided by the Bitbucket CI
Describe alternatives you've considered
No alternative considered.
I've already implemented the required changes and will raise a PR to close this issue.
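Bitbucket Pipelines exposes the relevant values through its standard BITBUCKET_* environment variables, so detection can be as simple as the following sketch (the returned key names are illustrative, not the plugin's schema):

```python
import os

def detect_bitbucket_ci(environ=None):
    """Return branch/build info when running under Bitbucket Pipelines,
    else None. BITBUCKET_BRANCH and BITBUCKET_BUILD_NUMBER are set by
    Pipelines itself."""
    env = environ if environ is not None else os.environ
    if "BITBUCKET_BUILD_NUMBER" not in env:
        return None  # not running under Bitbucket Pipelines
    return {
        "pipeline_branch": env.get("BITBUCKET_BRANCH", ""),
        "pipeline_build_no": env["BITBUCKET_BUILD_NUMBER"],
    }
```

Passing the environment as a parameter keeps the detection testable outside an actual pipeline.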
When comparing stacks for the same SCM_ID, it is hard to distinguish sets of measures. Extending the SCM_ID or adding a DESCRIPTION field can be a way to add some semantics to a pytest run.
Using the ITEM_START_TIME or RUN_DATE is insufficient if measures are taken on two equivalent ExecutionContexts at the same time.
We must describe in the documentation the REST API used by pytest-monitor
Describe the bug
Creating, binding, and closing a socket in a test allows the same port to be reopened later in that test; however, if the socket was originally created and bound in a fixture, rebinding fails, claiming OSError: [Errno 98] Address already in use.
To Reproduce
Complete input and output below, but the tl;dr is...
================================================= test session starts ==================================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/altendky/repos/chia-blockchain
plugins: monitor-1.6.3
collected 2 items
x.py F. [100%]
======================================================= FAILURES =======================================================
________________________________________________ test_a_socket[fixture] ________________________________________________
just_a_socket = ('fixture', <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>)
def test_a_socket(just_a_socket):
where, sock = just_a_socket
if where == "test":
print("creating and binding in test")
sock = make_a_bound_socket()
sock.close()
sock2 = socket.socket()
> sock2.bind(address)
E OSError: [Errno 98] Address already in use
x.py:37: OSError
------------------------------------------------ Captured stdout setup -------------------------------------------------
creating and binding in fixture
=============================================== short test summary info ================================================
FAILED x.py::test_a_socket[fixture] - OSError: [Errno 98] Address already in use
============================================= 1 failed, 1 passed in 0.09s ==============================================
import socket
import subprocess

import pytest


address = ("127.0.0.1", 33125)


def make_a_bound_socket():
    sock = socket.socket()
    sock.bind(address)
    return sock


@pytest.fixture(params=["fixture", "test"])
def just_a_socket(request):
    where = request.param

    if where == "fixture":
        print("creating and binding in fixture")
        sock = make_a_bound_socket()
    else:
        sock = None

    yield where, sock


def test_a_socket(just_a_socket):
    where, sock = just_a_socket
    if where == "test":
        print("creating and binding in test")
        sock = make_a_bound_socket()

    sock.close()

    sock2 = socket.socket()
    sock2.bind(address)
https://gist.github.com/altendky/656bd59340288be28e8e3eb9a8d6f1b8
cat > x.py << EOF
import socket
import subprocess

import pytest


address = ("127.0.0.1", 33125)


def make_a_bound_socket():
    sock = socket.socket()
    sock.bind(address)
    return sock


@pytest.fixture(params=["fixture", "test"])
def just_a_socket(request):
    where = request.param

    if where == "fixture":
        print("creating and binding in fixture")
        sock = make_a_bound_socket()
    else:
        sock = None

    yield where, sock


def test_a_socket(just_a_socket):
    where, sock = just_a_socket
    if where == "test":
        print("creating and binding in test")
        sock = make_a_bound_socket()

    sock.close()

    sock2 = socket.socket()
    sock2.bind(address)


def main():
    subprocess.run(["pytest", "--capture", "no", __file__], check=True)


# yuck
if __name__ == "__main__":
    main()
EOF
cat x.py
python3.9 -m venv venv
venv/bin/python --version --version
venv/bin/python -m pip install --upgrade pip setuptools wheel
venv/bin/pip install attrs==21.2.0 iniconfig==1.1.1 packaging==21.3 pluggy==1.0.0 py==1.11.0 pyparsing==3.0.6 pytest==6.2.5 toml==0.10.2
venv/bin/pip freeze
venv/bin/pytest x.py
venv/bin/pip install attrs==21.2.0 certifi==2021.10.8 charset-normalizer==2.0.9 idna==3.3 iniconfig==1.1.1 memory-profiler==0.60.0 packaging==21.3 pluggy==1.0.0 psutil==5.8.0 py==1.11.0 pyparsing==3.0.6 pytest==6.2.5 pytest-monitor==1.6.3 requests==2.26.0 toml==0.10.2 urllib3==1.26.7
venv/bin/pip freeze
venv/bin/pytest x.py
uname -a
lsb_release -a
$ cat > x.py << EOF
> import socket
> import subprocess
>
> import pytest
>
>
> address = ("127.0.0.1", 33125)
>
> def make_a_bound_socket():
> sock = socket.socket()
> sock.bind(address)
> return sock
>
>
> @pytest.fixture(params=["fixture", "test"])
> def just_a_socket(request):
> where = request.param
>
> if where == "fixture":
> print("creating and binding in fixture")
> sock = make_a_bound_socket()
> else:
> sock = None
>
> yield where, sock
>
>
> def test_a_socket(just_a_socket):
> where, sock = just_a_socket
> if where == "test":
> print("creating and binding in test")
> sock = make_a_bound_socket()
>
> sock.close()
>
> sock2 = socket.socket()
> sock2.bind(address)
>
>
> def main():
> subprocess.run(["pytest", "--capture", "no", __file__], check=True)
>
>
> # yuck
> if __name__ == "__main__":
> main()
> EOF
$ cat x.py
import socket
import subprocess

import pytest


address = ("127.0.0.1", 33125)


def make_a_bound_socket():
    sock = socket.socket()
    sock.bind(address)
    return sock


@pytest.fixture(params=["fixture", "test"])
def just_a_socket(request):
    where = request.param

    if where == "fixture":
        print("creating and binding in fixture")
        sock = make_a_bound_socket()
    else:
        sock = None

    yield where, sock


def test_a_socket(just_a_socket):
    where, sock = just_a_socket
    if where == "test":
        print("creating and binding in test")
        sock = make_a_bound_socket()

    sock.close()

    sock2 = socket.socket()
    sock2.bind(address)


def main():
    subprocess.run(["pytest", "--capture", "no", __file__], check=True)


# yuck
if __name__ == "__main__":
    main()
$ python3.9 -m venv venv
$ venv/bin/python --version --version
Python 3.9.5 (default, Jun 3 2021, 15:18:23)
[GCC 9.3.0]
$ venv/bin/python -m pip install --upgrade pip setuptools wheel
Requirement already satisfied: pip in ./venv/lib/python3.9/site-packages (21.1.1)
Collecting pip
Using cached pip-21.3.1-py3-none-any.whl (1.7 MB)
Requirement already satisfied: setuptools in ./venv/lib/python3.9/site-packages (56.0.0)
Collecting setuptools
Using cached setuptools-60.1.0-py3-none-any.whl (952 kB)
Collecting wheel
Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Installing collected packages: wheel, setuptools, pip
Attempting uninstall: setuptools
Found existing installation: setuptools 56.0.0
Uninstalling setuptools-56.0.0:
Successfully uninstalled setuptools-56.0.0
Attempting uninstall: pip
Found existing installation: pip 21.1.1
Uninstalling pip-21.1.1:
Successfully uninstalled pip-21.1.1
Successfully installed pip-21.3.1 setuptools-60.1.0 wheel-0.37.1
$ venv/bin/pip install attrs==21.2.0 iniconfig==1.1.1 packaging==21.3 pluggy==1.0.0 py==1.11.0 pyparsing==3.0.6 pytest==6.2.5 toml==0.10.2
Collecting attrs==21.2.0
Using cached attrs-21.2.0-py2.py3-none-any.whl (53 kB)
Collecting iniconfig==1.1.1
Using cached iniconfig-1.1.1-py2.py3-none-any.whl (5.0 kB)
Collecting packaging==21.3
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting pluggy==1.0.0
Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting py==1.11.0
Using cached py-1.11.0-py2.py3-none-any.whl (98 kB)
Collecting pyparsing==3.0.6
Using cached pyparsing-3.0.6-py3-none-any.whl (97 kB)
Collecting pytest==6.2.5
Using cached pytest-6.2.5-py3-none-any.whl (280 kB)
Collecting toml==0.10.2
Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Installing collected packages: pyparsing, toml, py, pluggy, packaging, iniconfig, attrs, pytest
Successfully installed attrs-21.2.0 iniconfig-1.1.1 packaging-21.3 pluggy-1.0.0 py-1.11.0 pyparsing-3.0.6 pytest-6.2.5 toml-0.10.2
$ venv/bin/pip freeze
attrs==21.2.0
iniconfig==1.1.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.0.6
pytest==6.2.5
toml==0.10.2
$ venv/bin/pytest x.py
================================================= test session starts ==================================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/altendky/repos/chia-blockchain
collected 2 items
x.py .. [100%]
================================================== 2 passed in 0.01s ===================================================
$ venv/bin/pip install attrs==21.2.0 certifi==2021.10.8 charset-normalizer==2.0.9 idna==3.3 iniconfig==1.1.1 memory-profiler==0.60.0 packaging==21.3 pluggy==1.0.0 psutil==5.8.0 py==1.11.0 pyparsing==3.0.6 pytest==6.2.5 pytest-monitor==1.6.3 requests==2.26.0 toml==0.10.2 urllib3==1.26.7
Requirement already satisfied: attrs==21.2.0 in ./venv/lib/python3.9/site-packages (21.2.0)
Collecting certifi==2021.10.8
Using cached certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
Collecting charset-normalizer==2.0.9
Using cached charset_normalizer-2.0.9-py3-none-any.whl (39 kB)
Collecting idna==3.3
Using cached idna-3.3-py3-none-any.whl (61 kB)
Requirement already satisfied: iniconfig==1.1.1 in ./venv/lib/python3.9/site-packages (1.1.1)
Collecting memory-profiler==0.60.0
Using cached memory_profiler-0.60.0-py3-none-any.whl
Requirement already satisfied: packaging==21.3 in ./venv/lib/python3.9/site-packages (21.3)
Requirement already satisfied: pluggy==1.0.0 in ./venv/lib/python3.9/site-packages (1.0.0)
Collecting psutil==5.8.0
Using cached psutil-5.8.0-cp39-cp39-manylinux2010_x86_64.whl (293 kB)
Requirement already satisfied: py==1.11.0 in ./venv/lib/python3.9/site-packages (1.11.0)
Requirement already satisfied: pyparsing==3.0.6 in ./venv/lib/python3.9/site-packages (3.0.6)
Requirement already satisfied: pytest==6.2.5 in ./venv/lib/python3.9/site-packages (6.2.5)
Collecting pytest-monitor==1.6.3
Using cached pytest_monitor-1.6.3-py3-none-any.whl (14 kB)
Collecting requests==2.26.0
Using cached requests-2.26.0-py2.py3-none-any.whl (62 kB)
Requirement already satisfied: toml==0.10.2 in ./venv/lib/python3.9/site-packages (0.10.2)
Collecting urllib3==1.26.7
Using cached urllib3-1.26.7-py2.py3-none-any.whl (138 kB)
Requirement already satisfied: wheel in ./venv/lib/python3.9/site-packages (from pytest-monitor==1.6.3) (0.37.1)
Installing collected packages: urllib3, psutil, idna, charset-normalizer, certifi, requests, memory-profiler, pytest-monitor
Successfully installed certifi-2021.10.8 charset-normalizer-2.0.9 idna-3.3 memory-profiler-0.60.0 psutil-5.8.0 pytest-monitor-1.6.3 requests-2.26.0 urllib3-1.26.7
$ venv/bin/pip freeze
attrs==21.2.0
certifi==2021.10.8
charset-normalizer==2.0.9
idna==3.3
iniconfig==1.1.1
memory-profiler==0.60.0
packaging==21.3
pluggy==1.0.0
psutil==5.8.0
py==1.11.0
pyparsing==3.0.6
pytest==6.2.5
pytest-monitor==1.6.3
requests==2.26.0
toml==0.10.2
urllib3==1.26.7
$ venv/bin/pytest x.py
================================================= test session starts ==================================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/altendky/repos/chia-blockchain
plugins: monitor-1.6.3
collected 2 items
x.py F. [100%]
======================================================= FAILURES =======================================================
________________________________________________ test_a_socket[fixture] ________________________________________________
just_a_socket = ('fixture', <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>)
def test_a_socket(just_a_socket):
where, sock = just_a_socket
if where == "test":
print("creating and binding in test")
sock = make_a_bound_socket()
sock.close()
sock2 = socket.socket()
> sock2.bind(address)
E OSError: [Errno 98] Address already in use
x.py:37: OSError
------------------------------------------------ Captured stdout setup -------------------------------------------------
creating and binding in fixture
=============================================== short test summary info ================================================
FAILED x.py::test_a_socket[fixture] - OSError: [Errno 98] Address already in use
============================================= 1 failed, 1 passed in 0.09s ==============================================
$ uname -a
Linux p1 5.4.0-91-generic #102-Ubuntu SMP Fri Nov 5 16:31:28 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal
Expected behavior
I expect sockets created and bound in a fixture to be able to be closed.
Desktop (please complete the following information):
Additional context
I probably won't get to it tonight, but I will try to see what I can learn about pytest-monitor with an eye towards fixing this. I tested back to pytest-monitor v1.0.0 and the issue seems to be present throughout.
Describe the bug
Using a remote server, no execution contexts are pushed from the pytest session.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
An execution context should always be pushed if the current one does not exist.
As discussed here in #38, pytest almost fully supports unittest, but currently pytest-monitor skips all unittest-based tests.
I originally proposed to document this, but it was suggested that perhaps this can be fixed and pytest-monitor could support unittest-based tests too, which would be awesome.
So, as suggested, I am opening this issue to track the feasibility/progress on this front.
Description
The recent delivery of pytest-monitor does not incorporate the ability to send metrics to a remote server, as the interface continues to use the previous data model.
Expected behavior
The new data model must be supported effectively.
Splitting off from #38 (comment) to focus the discussion on the nuances of the memory measurement side of this extension. I will also bring up some memory-leak detection issues, which I obviously don't expect pytest-monitor to support, but perhaps something good will come out of discussing them with knowledgeable devs. I have a real need to identify tests with smallish leaks, as these leaks add up, requiring more RAM than they should. And when running thousands of tests under xdist/multiple pytest workers, the memory requirements go up dramatically.
I noticed you aren't doing gc.collect() before taking a new measurement, without which you may get incorrect reports, as most of the time the GC doesn't run immediately on Python object deletion. At least, this is the approach I have been using in all the profilers I have been developing.
this was split off into a separate issue: #40
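To illustrate where the collection fits, here is a small self-contained sketch using the stdlib tracemalloc (pytest-monitor itself measures RSS via memory_profiler; the helper below is only an illustration of collecting before reading):

```python
import gc
import tracemalloc

def measure_net_allocation(fn):
    """Run fn and report net traced allocation in bytes, forcing a
    garbage collection before each reading so unreachable objects
    left over from earlier work don't skew the result."""
    gc.collect()  # flush objects the GC hasn't reclaimed yet
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    fn()
    gc.collect()  # drop garbage created by fn itself before reading
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before
```

The same bracketing (collect, read, run, collect, read) applies unchanged when the reading is an RSS probe instead of tracemalloc.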
The memory_profiler package that you use relies on RSS, which I also mostly use, but it's not super-reliable if memory gets swapped out. Recently I discovered PSS (proportional set size) via smem, via this https://unix.stackexchange.com/a/169129/291728, and it really helped to solve the "process being killed because it used up its cgroups quota" problem in my analysis of pytest being killed on CI here: huggingface/transformers#11408 (please scroll down to the smem discussion in item 4).
The question is how to measure it in Python, if it proves to be a better metric, like the C program smem does.
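One way to read PSS from Python without shelling out to smem, assuming a Linux kernel >= 4.14 that provides smaps_rollup (a sketch; it returns None where unsupported):

```python
def pss_kib(pid="self"):
    """Proportional set size of a process in kB, read from
    /proc/<pid>/smaps_rollup; None on non-Linux or older kernels."""
    try:
        with open(f"/proc/{pid}/smaps_rollup") as f:
            for line in f:
                if line.startswith("Pss:"):
                    return int(line.split()[1])  # the field is in kB
    except OSError:
        return None
    return None
```

Unlike parsing the full /proc/<pid>/smaps, the rollup file is a single pre-summed record, so the read is cheap enough to take per test.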
How can we tell how much memory test A used vs test B if both tests load the same large module? Test A would appear to use a lot of memory if it ran first, when in reality that could be far from the truth; if the execution order were reversed, the results would be completely different. This is especially true in environments where tests run in random order, by choice or by construction (e.g. xdist). So, in the context of pytest-monitor, comparing different sessions would yield reports of varying memory, but they would be false reports.
One possible solution is to somehow pre-run all the tests so that all modules and any globals get pre-loaded, and measure only the second time the tests are run. This would be a problem with pytest-xdist, but for the sake of regression testing one could run the full test suite with pytest -n 1 just to ensure the approach is consistent. There are other ways to make the tests run in the same order, but they only work if no new tests are added.
It'd be great to be able to use such a plugin for memory leak detection, in which case the measurement from the first time a test is loaded should be discarded, and we would expect 0 extra memory usage on the 2nd round (well, sometimes the 3rd) if there is no leak. So far I've been struggling to get that 0 using RSS; I get fluctuations in reported memory.
I guess this is tightly related to (3) and is probably really the same issue.
Continuing with memory leakage: how could we detect badly written tests, where a test doesn't clean up completely on teardown and leaves hanging objects that may continue consuming significant amounts of memory? This is very difficult to measure, because any test may appear to be such a test if it loads some module for the first time.
Description
After starting a test session the plugin crashes. There are no errors when using pytest alone.
The log says it's an error with argparse.
To Reproduce
Steps to reproduce the behavior:
Log
Traceback (most recent call last):
  File "c:\program files\python38\lib\runpy.py", line 193, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\program files\python38\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Program Files\Python38\Scripts\pytest.exe\__main__.py", line 7, in <module>
    from connexion.decorators.uri_parsing import AlwaysMultiURIParser
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 185, in console_main
    code = main()
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 143, in main
    config = _prepareconfig(args, plugins)
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 318, in _prepareconfig
    config = pluginmanager.hook.pytest_cmdline_parse(
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 203, in _multicall
    gen.send(outcome)
  File "c:\program files\python38\lib\site-packages\_pytest\helpconfig.py", line 100, in pytest_cmdline_parse
    config: Config = outcome.get_result()
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 80, in get_result
    raise ex[1].with_traceback(ex[2])
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 1003, in pytest_cmdline_parse
    self.parse(args)
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 1283, in parse
    self._preparse(args, addopts=addopts)
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 1175, in _preparse
    self.known_args_namespace = self._parser.parse_known_args(
  File "c:\program files\python38\lib\site-packages\_pytest\config\argparsing.py", line 146, in parse_known_args
    return self.parse_known_and_unknown_args(args, namespace=namespace)[0]
  File "c:\program files\python38\lib\site-packages\_pytest\config\argparsing.py", line 155, in parse_known_and_unknown_args
    optparser = self._getparser()
  File "c:\program files\python38\lib\site-packages\_pytest\config\argparsing.py", line 122, in _getparser
    arggroup.add_argument(*n, **a)
  File "c:\program files\python38\lib\argparse.py", line 1386, in add_argument
    return self._add_action(action)
  File "c:\program files\python38\lib\argparse.py", line 1590, in _add_action
    action = super(_ArgumentGroup, self)._add_action(action)
  File "c:\program files\python38\lib\argparse.py", line 1400, in _add_action
    self._check_conflict(action)
  File "c:\program files\python38\lib\argparse.py", line 1539, in _check_conflict
    conflict_handler(action, confl_optionals)
  File "c:\program files\python38\lib\argparse.py", line 1548, in _handle_conflict_error
    raise ArgumentError(action, message % conflict_string)
argparse.ArgumentError: argument --remote: conflicting option string: --remote
Desktop (please complete the following information):
Object: turn the field TEST_SESSIONS.RUN_DESCRIPTION from varchar into JSON
Motivation:
This field already holds a json dump of both tags and description. This technical change would allow easier handling of these data and considerably help in processing them.
Impacts:
This breaks the compatibility with existing databases. A check can be added to avoid corrupting an existing database.
Describe the bug
After installing pytest-monitor to plug it inside https://github.com/PyFPDF/fpdf2, a stacktrace was raised when calling pytest.
To Reproduce
Steps to reproduce the behavior:
1. Run pip install --upgrade . pytest-monitor -r test/requirements.txt in the cloned repo directory
2. Run pytest
Expected behavior
No error
Stacktrace
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/_pytest/main.py", line 266, in wrap_session
INTERNALERROR> config.hook.pytest_sessionstart(session=session)
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_callers.py", line 34, in _multicall
INTERNALERROR> next(gen) # first yield
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pytest_monitor/pytest_monitor.py", line 194, in pytest_sessionstart
INTERNALERROR> session.pytest_monitor.compute_info(session.config.option.mtr_description,
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pytest_monitor/session.py", line 85, in compute_info
INTERNALERROR> self.set_environment_info(ExecutionContext())
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pytest_monitor/sys_utils.py", line 72, in __init__
INTERNALERROR> self.__cpu_freq_base = psutil.cpu_freq().current
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/psutil/__init__.py", line 1857, in cpu_freq
INTERNALERROR> ret = _psplatform.cpu_freq()
INTERNALERROR> File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/psutil/_pslinux.py", line 701, in cpu_freq
INTERNALERROR> raise NotImplementedError(
INTERNALERROR> NotImplementedError: can't find current frequency file
Desktop (please complete the following information):
We should use the black and flake8 tools to improve code quality.
Ideally, every commit should respect this, enforced using pre-commit hooks.
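A minimal .pre-commit-config.yaml for that setup might look like this (the repository URLs are the tools' official hook repos; the pinned revs are placeholders to adjust to whatever versions the project standardizes on):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 22.3.0        # pin to the project's chosen black version
    hooks:
      - id: black
  - repo: https://github.com/PyCQA/flake8
    rev: 4.0.1         # pin to the project's chosen flake8 version
    hooks:
      - id: flake8
```

With this in place, `pre-commit install` makes both tools run automatically on every commit.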
The way memory usage is collected does not account for the garbage collector.
The idea here is to provide two modes:
An option should be used to control this feature.
Describe the bug
When running pytest-monitor with option --remote-server, I am unable to send measures to the remote server. I got the warning Cannot insert session to remote monitor server (BAD REQUEST)
To Reproduce
Using monitor-server-api as the remote server.
Here is the super simple Flask-MongoDB backend for your plugin.
Open to any comments :)
Remove all support for Python <= 3.7 and pytest <= 5.
Switch to pyproject.toml.
Describe the bug
Monitoring results for failed tests are missing from .pymon.
To Reproduce
# test.py
def test_1():
    assert 1 == 1


def test_2():
    assert 1 == 2
pytest test.py
.pymon
has only 1 entry for test_1
in TEST_METRICS
table.
Expected behavior
Unless I missed something in the documentation, I expect metrics to be reported for failed tests as well.
Desktop (please complete the following information):
ITEM holds the function name using a pattern that separates the module from the function name.
It would be better if ITEM held the full function path, using '.' as the only separator.
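A sketch of what that conversion could look like, starting from a pytest node id (item_to_dotted_path is a hypothetical helper written for illustration, not the plugin's code):

```python
def item_to_dotted_path(nodeid: str) -> str:
    """Convert a pytest node id such as
    'tests/test_mod.py::TestClass::test_fn' into the dotted path
    'tests.test_mod.TestClass.test_fn', using '.' as the only separator.
    """
    path, _, rest = nodeid.partition("::")
    # Strip the '.py' suffix and turn path separators into dots.
    module = path[:-3] if path.endswith(".py") else path
    module = module.replace("/", ".").replace("\\", ".")
    parts = [module] + [p for p in rest.split("::") if p]
    return ".".join(parts)
```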
Describe the bug
When a fixture skips via pytest.skip("some reason"), teardown fails for pytest-monitor with AttributeError: 'Function' object has no attribute 'monitor_results'.
To Reproduce
https://replit.com/@altendky/SwiftCrimsonDownload-1
import subprocess

import pytest

@pytest.fixture
def a_fixture():
    pytest.skip("because i want to show the issue")

def test_skipping(a_fixture):
    pass

def main():
    subprocess.run(["pytest", __file__])

if __name__ == "__main__":
    main()
============================= test session starts ==============================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/runner/SwiftCrimsonDownload
plugins: monitor-1.6.2
collected 1 item
main.py sE [100%]
==================================== ERRORS ====================================
______________________ ERROR at teardown of test_skipping ______________________
request = <SubRequest 'prf_tracer' for <Function test_skipping>>
@pytest.fixture(autouse=True)
def prf_tracer(request):
    if not PYTEST_MONITORING_ENABLED:
        yield
    else:
        ptimes_a = request.session.pytest_monitor.process.cpu_times()
        yield
        ptimes_b = request.session.pytest_monitor.process.cpu_times()
>       if not request.node.monitor_skip_test and request.node.monitor_results:
E       AttributeError: 'Function' object has no attribute 'monitor_results'
/opt/virtualenvs/python3/lib/python3.8/site-packages/pytest_monitor/pytest_monitor.py:230: AttributeError
=========================== short test summary info ============================
ERROR main.py::test_skipping - AttributeError: 'Function' object has no attri...
========================= 1 skipped, 1 error in 0.13s ==========================
Expected behavior
No exception raised for this case.
Screenshots
Output shared as text above.
Desktop (please complete the following information):
Additional context
Template issues corrected in #49.
Naive "fix" submitted in #51.
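In a similar defensive spirit, the check in the teardown could use getattr with defaults so a node that never ran (because a fixture skipped) does not raise. This is only a sketch of the idea, not necessarily the patch from #51; should_record is a hypothetical helper.

```python
def should_record(node) -> bool:
    """Return True only when monitoring results exist for the node.

    getattr with a default avoids the AttributeError seen when a fixture
    calls pytest.skip() and pytest-monitor never attaches its
    monitor_skip_test / monitor_results attributes to the item.
    """
    return (not getattr(node, "monitor_skip_test", True)
            and getattr(node, "monitor_results", None) is not None)
```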
Is your feature request related to a problem? Please describe.
Inspecting the database after every test-run to get the output of pytest-monitor seems to be quite a hassle.
Describe the solution you'd like
Looking at pytest-benchmark, they write a summary of the output to the console, see screenshot below.
Describe alternatives you've considered
None.
Additional context
None.
Describe the bug
with Perforce source control
To Reproduce
Steps to reproduce the behavior:
Expected behavior
The plugin should ignore (or handle gracefully) decoding errors when retrieving the commit message.
Stack trace
INTERNALERROR> File "/xxxxxxxxxxxxxxxxxxxxx/lib/python3.6/site-packages/pytest_monitor/sys_utils.py", line 42, in determine_scm_revision
INTERNALERROR> return p_out.decode().split('\n')[0]
INTERNALERROR> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 78: invalid continuation byte
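A minimal sketch of a tolerant decode for that line (first_line is a hypothetical helper; the real code lives in pytest_monitor/sys_utils.py):

```python
def first_line(p_out: bytes) -> str:
    """Decode SCM command output tolerantly and return its first line.

    errors="replace" substitutes undecodable bytes (such as the 0xe9
    Perforce emitted here) with U+FFFD instead of raising
    UnicodeDecodeError.
    """
    return p_out.decode("utf-8", errors="replace").split("\n")[0]
```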
Desktop (please complete the following information):
The bug
In the function below, the plugin calculated that memory usage is less than one MB. However, when I print the actual size of the array it is between 7 and 8 MB. I assume it is related to memory_profiler, because I experienced similar results when using it.
import sys
import numpy as np

def test_extension():
    ROWS = 1000000000
    data = np.empty(ROWS)
    with open('temp.txt', 'w') as out:
        out.write(str(sys.getsizeof(data)))
    assert True
Desktop:
Describe the solution you'd like
We'd like to keep track of different pipeline results in a dedicated database.
Describe alternatives you've considered
Additional context
I got a working prototype (see #78) but would like to discuss the following points:
pip install pytest-monitor[postgres]
The function+module analysis scope is too large; for most cases, function-level analytics are enough. We should consider reviewing the default behaviour.
After installing pytest-monitor, and running pytest, I get an error:
pip install pytest-monitor
pytest
Stack trace:
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/_pytest/main.py", line 266, in wrap_session
INTERNALERROR> config.hook.pytest_sessionstart(session=session)
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_callers.py", line 34, in _multicall
INTERNALERROR> next(gen) # first yield
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pytest_monitor/pytest_monitor.py", line 194, in pytest_sessionstart
INTERNALERROR> session.pytest_monitor.compute_info(session.config.option.mtr_description,
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pytest_monitor/session.py", line 85, in compute_info
INTERNALERROR> self.set_environment_info(ExecutionContext())
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pytest_monitor/sys_utils.py", line 72, in __init__
INTERNALERROR> self.__cpu_freq_base = psutil.cpu_freq().current
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/psutil/__init__.py", line 1864, in cpu_freq
INTERNALERROR> ret = _psplatform.cpu_freq()
INTERNALERROR> File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/psutil/_psosx.py", line 179, in cpu_freq
INTERNALERROR> curr, min_, max_ = cext.cpu_freq()
INTERNALERROR> FileNotFoundError: [Errno 2] No such file or directory (originated from sysctl(HW_CPU_FREQ))
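Both this macOS failure and the Linux NotImplementedError above come from the same unguarded psutil.cpu_freq() call. One defensive shape for it, sketched with the frequency probe injected as a callable so the fallback logic is testable (safe_current_freq is a hypothetical helper; in practice it would be called as safe_current_freq(psutil.cpu_freq)):

```python
def safe_current_freq(freq_fn, default=0.0):
    """Best-effort current CPU frequency.

    freq_fn is expected to behave like psutil.cpu_freq: it may raise
    NotImplementedError (Linux without sysfs frequency files) or
    FileNotFoundError (some macOS builds, as in the trace above), or
    return None. Fall back to a sentinel instead of aborting the whole
    pytest session.
    """
    try:
        freq = freq_fn()
    except (NotImplementedError, FileNotFoundError):
        return default
    return freq.current if freq is not None else default
```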
Desktop:
Describe the bug
Consider the following snippet:
def run(a, b):
    """
    >>> run(3, 30)
    33
    """
    return a + b
Running pytest with --no-monitor causes the test to fail. If the option is unset, the test still fails.
Expected behavior
doctests should be run without monitoring.