
pytest-monitor's Introduction

Pytest-Monitor



Pytest-monitor is a pytest plugin designed for analyzing resource usage.


Features

  • Analyze the resource consumption of your test functions:
    • memory consumption
    • time duration
    • CPU usage
  • Keep a history of your resource consumption measurements.
  • Compare how your code behaves between different environments.

Usage

Simply run pytest as usual: pytest-monitor is active by default as soon as it is installed. After your first session, a .pymon SQLite database will be available in the directory from which pytest was run.

Example of information collected for the execution context:

| Field | Value |
| --- | --- |
| ENV_H | 8294b1326007d9f4c8a1680f9590c23d |
| CPU_COUNT | 36 |
| CPU_FREQUENCY_MHZ | 3000 |
| CPU_TYPE | x86_64 |
| CPU_VENDOR | Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz |
| RAM_TOTAL_MB | 772249 |
| MACHINE_NODE | some.host.vm.fr |
| MACHINE_TYPE | x86_64 |
| MACHINE_ARCH | 64bit |
| SYSTEM_INFO | Linux - 3.10.0-693.el7.x86_64 |
| PYTHON_INFO | 3.6.8 (default, Jun 28 2019, 11:09:04) [GCC ...] |

Here is an example of collected data stored in the result database:

| RUN_DATE | ENV_H | SCM_ID | ITEM_START_TIME | ITEM | KIND | COMPONENT | TOTAL_TIME | USER_TIME | KERNEL_TIME | CPU_USAGE | MEM_USAGE |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:36.890477 | pkg1.test_mod1/test_sleep1 | function | None | 1.005669 | 0.54 | 0.06 | 0.596618 | 1.781250 |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:39.912029 | pkg1.test_mod1/test_heavy[10-10] | function | None | 0.029627 | 0.55 | 0.08 | 21.264498 | 1.781250 |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:39.948922 | pkg1.test_mod1/test_heavy[100-100] | function | None | 0.028262 | 0.56 | 0.09 | 22.998773 | 1.781250 |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:39.983869 | pkg1.test_mod1/test_heavy[1000-1000] | function | None | 0.030131 | 0.56 | 0.10 | 21.904277 | 2.132812 |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:40.020823 | pkg1.test_mod1/test_heavy[10000-10000] | function | None | 0.060060 | 0.57 | 0.14 | 11.821601 | 41.292969 |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:40.093490 | pkg1.test_mod2/test_sleep_400ms | function | None | 0.404860 | 0.58 | 0.15 | 1.803093 | 2.320312 |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:40.510525 | pkg2.test_mod_a/test_master_sleep | function | None | 5.006039 | 5.57 | 0.15 | 1.142620 | 2.320312 |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:45.530780 | pkg3.test_mod_cl/test_method1 | function | None | 0.030505 | 5.58 | 0.16 | 188.164762 | 2.320312 |
| 2020-02-17T09:11:36.731233 | 8294b1326007d9f4c8a1680f9590c23d | de23e6bdb987ae21e84e6c7c0357488ee66f2639 | 2020-02-17T09:11:50.582954 | pkg4.test_mod_a/test_force_monitor | function | test | 1.005015 | 11.57 | 0.17 | 11.681416 | 2.320312 |
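All of this data lives in the .pymon SQLite file, so it can be queried with any SQLite client. Below is a minimal sketch using Python's standard sqlite3 module; it assumes the default .pymon file name and the TEST_METRICS table and column names shown above.

import sqlite3

# Open the database produced by a pytest-monitor run and list the most
# memory-hungry test items (sketch; adjust path and columns to your needs).
con = sqlite3.connect(".pymon")
query = (
    "SELECT ITEM, TOTAL_TIME, CPU_USAGE, MEM_USAGE "
    "FROM TEST_METRICS ORDER BY MEM_USAGE DESC LIMIT 10"
)
for item, total_time, cpu_usage, mem_usage in con.execute(query):
    print(item, total_time, cpu_usage, mem_usage)
con.close()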

Documentation

Full documentation is available.

Installation

You can install pytest-monitor via conda (through the conda-forge channel):

$ conda install pytest-monitor -c https://conda.anaconda.org/conda-forge

Another possibility is to install pytest-monitor via pip from PyPI:

$ pip install pytest-monitor

Requirements

You will need a valid Python 3.5+ interpreter. To collect measurements, we rely on:

  • psutil to extract CPU usage
  • memory_profiler to collect memory usage
  • and pytest (obviously!)

Note: this plugin does not work with unittest-based tests.

Storage backends

By default, pytest-monitor stores its results in a local SQLite3 database, making them easy to access. If you need a more powerful way to analyze your results, check out the monitor-server-api, which provides both a REST API for storing and keeping a history of your results and an API for querying your data. An alternative service (using MongoDB) is available thanks to a contribution from @dremdem: pytest-monitor-backend.

Contributing

Contributions are very welcome. Tests can be run with tox. Before submitting a pull request, please ensure that:

  • both internal tests and examples are passing.
  • internal tests have been written if necessary.
  • if your contribution provides a new feature, make sure to provide an example and update the documentation accordingly.

License

This code is distributed under the MIT license. pytest-monitor is free, open-source software.

Issues

If you encounter any problem, please file an issue along with a detailed description.

Author

The main author of pytest-monitor is Jean-Sébastien Dieu, who can be reached at [email protected].


This pytest plugin was generated with Cookiecutter along with @hackebrot's cookiecutter-pytest-plugin template.

pytest-monitor's People

Contributors

altendky, eldar-mustafayev, jdhalimi, jonashaag, jraygauthier, js-dieu, lebigot, marksmayo, stas00


pytest-monitor's Issues

Provide an option to force garbage collector to run between tests

The way memory usage is collected does not take the garbage collector into account.
The idea here is to provide two modes:

  • The first is the current behaviour: garbage is managed by Python during test execution.
  • The second uses a stricter approach, forcing the garbage collector to run between two tests.

An option should be used to control this feature.
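For illustration, the stricter mode could be approximated today with an autouse fixture in a conftest.py; this is only a sketch of the idea, not pytest-monitor's implementation.

# conftest.py -- sketch only, not pytest-monitor's actual code
import gc

import pytest


@pytest.fixture(autouse=True)
def force_gc_between_tests():
    # Run a full collection before and after each test so that memory
    # measurements are less affected by objects still awaiting collection.
    gc.collect()
    yield
    gc.collect()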

Using sqlite's JSON field to store JSON data instead of text.

Object: turn the field TEST_SESSIONS.RUN_DESCRIPTION from varchar into JSON

Motivation:
This field already holds a JSON dump of both tags and description. This technical change would make the data easier to handle and considerably simplify processing.

Impacts:
This breaks compatibility with existing databases. A check can be added to avoid corrupting an existing database.
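For illustration, once the field holds JSON, a session tag could be queried directly with SQLite's JSON1 functions (available in most recent Python builds of sqlite3); the tag name pipeline_branch is just an example.

import sqlite3

con = sqlite3.connect(".pymon")
# Pull a single tag out of the JSON run description of every session.
rows = con.execute(
    "SELECT json_extract(RUN_DESCRIPTION, '$.pipeline_branch') FROM TEST_SESSIONS"
)
for (branch,) in rows:
    print(branch)
con.close()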

Got `AttributeError` when starting a `pytest` session

Describe the bug
Got INTERNALERROR> AttributeError: 'ModuleWrapper' object has no attribute 'cpu_freq' when starting pytest.
I guess psutil has different methods in different versions 😞
We just need to pin the minimal psutil version that the plugin supports.
psutil.cpu_freq() was added in 5.1.0

To Reproduce

  1. Install the latest version of pytest-monitor pip3 install git+https://github.com/CFMTech/pytest-monitor.git
  2. Run python3 -m pytest -lvv tests/backend/unit/

Expected behavior
run tests with pytest-monitor plugin

Exception

INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "python3.7/site-packages/_pytest/main.py", line 105, in wrap_session
INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
INTERNALERROR>   File "/python3.7/site-packages/pluggy/__init__.py", line 617, in __call__
INTERNALERROR>     return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
INTERNALERROR>   File "/python3.7/site-packages/pluggy/__init__.py", line 222, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/python3.7/site-packages/pluggy/__init__.py", line 216, in <lambda>
INTERNALERROR>     firstresult=hook.spec_opts.get('firstresult'),
INTERNALERROR>   File "/python3.7/site-packages/pluggy/callers.py", line 201, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/python3.7/site-packages/pluggy/callers.py", line 76, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/python3.7/site-packages/pluggy/callers.py", line 175, in _multicall
INTERNALERROR>     next(gen)   # first yield
INTERNALERROR>   File "/python3.7/site-packages/pytest_monitor/pytest_monitor.py", line 174, in pytest_sessionstart
INTERNALERROR>     session.config.option.mtr_tags)
INTERNALERROR>   File "/python3.7/site-packages/pytest_monitor/session.py", line 80, in compute_info
INTERNALERROR>     self.set_environment_info(ExecutionContext())
INTERNALERROR>   File "/python3.7/site-packages/pytest_monitor/sys_utils.py", line 39, in __init__
INTERNALERROR>     self.__cpu_freq_base = psutil.cpu_freq().current
INTERNALERROR> AttributeError: 'ModuleWrapper' object has no attribute 'cpu_freq'

Desktop (please complete the following information):

  • OS: macOS
  • Version 10.15.5 (19F101)

Additional context

psutil                                2.1.1

Use http code from standard http package

Is your feature request related to a problem? Please describe.
To ease maintenance and improve code clarity, we should use HTTP codes from the http.HTTPStatus enumeration when sending results to a monitor server.

Describe the solution you'd like
Use http.HTTPStatus enumeration

Describe alternatives you've considered
Hard-coding HTTP codes is not a good solution, as it obscures the meaning of each response and invites inconsistencies between implementations.
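A short sketch of the proposal; the endpoint URL and payload are illustrative, and only the use of http.HTTPStatus instead of literal integers is the point.

from http import HTTPStatus

import requests

# Illustrative: check a monitor-server response against named constants
# rather than hard-coded numbers such as 200 or 400.
response = requests.post("http://monitor.example.com/api/v1/sessions", json={})
if response.status_code == HTTPStatus.CREATED:
    print("session stored")
elif response.status_code == HTTPStatus.BAD_REQUEST:
    print("server rejected the payload")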

Crash on __init__ if psutil.cpu_freq() gives no results

Describe the bug
When running on a system whose architecture is such that psutil.cpu_freq() returns None, pytest-monitor crashes pytest on startup:

AttributeError: 'NoneType' object has no attribute 'current'

To Reproduce

An example of an architecture with this problem is 5.10.104-linuxkit, the version used by the Docker image python:3.9-slim-buster.

As of the current version 5.9.0, psutil tries to find the CPU frequency in /sys/devices/system/cpu/cpufreq/policy0 or /sys/devices/system/cpu/cpu0/cpufreq; if it can't find it there, it checks /proc/cpuinfo for lines starting with cpu mhz. But that doesn't always work (in my case the CPU speed was given only in BogoMIPS). In that case it returns None, triggering this crash.

Expected behavior

I would prefer that pytest-monitor fail gracefully in this case, or perhaps default CPU utilization to zero and emit a warning.
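A defensive sketch of what graceful degradation could look like (not the plugin's actual code); it also covers the AttributeError and FileNotFoundError variants reported in other issues.

import warnings

import psutil


def safe_cpu_frequency():
    # Return the current CPU frequency in MHz, or 0 if it cannot be determined.
    cpu_freq = getattr(psutil, "cpu_freq", None)
    if cpu_freq is None:  # very old psutil without cpu_freq (< 5.1.0)
        return 0
    try:
        freq = cpu_freq()
    except (NotImplementedError, FileNotFoundError):  # e.g. linuxkit, Apple Silicon
        freq = None
    if freq is None:
        warnings.warn("Unable to determine CPU frequency; defaulting to 0.")
        return 0
    return freq.current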

Desktop (please complete the following information):

  • OS: 5.10.104-linuxkit
  • Python version: 3.9.12
  • Pytest version: 6.2.1
  • pytest-monitor version: 1.6.3

Support for newer python versions

The Python badge in the README still reports that 3.8 is the most recent supported version.

Please add support for all released Python versions, and maybe drop support for older releases (some open-source projects require at least Python 3.7).

Automatic gathering of a CI build information

Object: gathering metrics from CI should be automatic

Motivation:
Currently, collecting build information (pipeline, build number, possibly a URL) must be done by hand. It would be easier if pytest-monitor could handle the following CI systems automatically (a sketch of environment-based detection follows the list):

  • Jenkins
  • Circle CI
  • Travis CI
  • GitLab CI
  • Drone CI
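A sketch of what environment-based detection could look like; the environment variables below are the ones these CI systems document, and the returned tuple format is purely illustrative.

import os


def detect_ci_build():
    # Return (ci_name, build_number) for a few common CI systems, or None.
    if os.environ.get("JENKINS_URL"):
        return "jenkins", os.environ.get("BUILD_NUMBER")
    if os.environ.get("CIRCLECI"):
        return "circleci", os.environ.get("CIRCLE_BUILD_NUM")
    if os.environ.get("TRAVIS"):
        return "travis", os.environ.get("TRAVIS_BUILD_NUMBER")
    if os.environ.get("GITLAB_CI"):
        return "gitlab", os.environ.get("CI_PIPELINE_ID")
    if os.environ.get("DRONE"):
        return "drone", os.environ.get("DRONE_BUILD_NUMBER")
    return None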

unicode issue in determine_scm_revision with Perforce

Describe the bug

with Perforce source control

  • pytest-monitor crashes if a non-ASCII/Unicode character is present in the commit log under Perforce
  • incorrect scm_revision: the whole commit message is stored instead of only the p4 change number

To Reproduce
Steps to reproduce the behavior:

  1. use Perforce SCM
  2. put an accented / non-ASCII character in the commit message
  3. run pytest with pytest-monitor

Expected behavior
The commit message should be ignored when retrieving the SCM revision.

Stack trace

INTERNALERROR>   File "/xxxxxxxxxxxxxxxxxxxxx/lib/python3.6/site-packages/pytest_monitor/sys_utils.py", line 42, in determine_scm_revision
INTERNALERROR>     return p_out.decode().split('\n')[0]
INTERNALERROR> UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 78: invalid continuation byte
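One possible mitigation, sketched against the line shown in the trace, is to decode the SCM output defensively so stray bytes cannot abort the session; extracting only the Perforce change number would additionally need parsing specific to p4 output, which is not shown here.

def first_line_of_scm_output(raw_output: bytes) -> str:
    # Sketch: replace undecodable bytes instead of raising UnicodeDecodeError,
    # then keep only the first line, as the current code does.
    return raw_output.decode(errors="replace").split("\n")[0]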

Desktop (please complete the following information):

  • OS: Linux
  • Python version: 3.6
  • Pytest version: [e.g. 6.2.3]
  • pytest-monitor version: [e.g. 1.6.2]

Use json format to describe a session

We have the --description parameter, which helps us provide a meaningful description of a run.
It would help to turn this field into a JSON string, to ease filtering during post-run analysis.
Example:

pytest --description "my description" --tag pandas=1.0.1 --tag pipeline_branch=master

Such command line would produce the following JSON string:

"{"description": "my description", "pandas":"1.0.1", "pipeline_branch":"master"}

This would clearly help in filtering sessions by keys/values of the JSON string.
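A sketch of how the plugin could build that string from the existing --description and --tag options (illustrative helper, not the actual implementation):

import json


def build_run_description(description, tags):
    # Merge the free-text description and key=value tags into one JSON string.
    fields = {"description": description}
    for tag in tags:
        key, _, value = tag.partition("=")
        fields[key] = value
    return json.dumps(fields)


# build_run_description("my description", ["pandas=1.0.1", "pipeline_branch=master"])
# -> '{"description": "my description", "pandas": "1.0.1", "pipeline_branch": "master"}'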

Add Bitbucket CI details to the run database

Is your feature request related to a problem? Please describe.
I'd like to save the information that is provided by the Bitbucket CI to the database of test runs.

Describe the solution you'd like
pytest-monitor should automatically detect the branch name and build number provided by the bitbucket CI

Describe alternatives you've considered
No alternative considered.

I've already implemented the required changes and will raise a PR to close this issue.

Ability to add a tag/description to a run

When comparing stacks for the same SCM_ID, it is hard to distinguish sets of measures. Extending the SCM_ID or adding a DESCRIPTION field would be a way to add some semantics to a pytest run.

Using the ITEM_START_TIME or RUN_DATE is insufficient if measures are taken on two equivalent ExecutionContexts at the same time.

NotImplementedError: can't find current frequency file

Describe the bug
After installing pytest-monitor to plug it into https://github.com/PyFPDF/fpdf2, a stack trace was raised when calling pytest.

To Reproduce
Steps to reproduce the behavior:

  1. Clone https://github.com/PyFPDF/fpdf2
  2. Run pip install --upgrade . pytest-monitor -r test/requirements.txt in the cloned repo directory
  3. Run pytest

Expected behavior
No error

Stacktrace

INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/_pytest/main.py", line 266, in wrap_session
INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR>     return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pluggy/_callers.py", line 34, in _multicall
INTERNALERROR>     next(gen)  # first yield
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pytest_monitor/pytest_monitor.py", line 194, in pytest_sessionstart
INTERNALERROR>     session.pytest_monitor.compute_info(session.config.option.mtr_description,
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pytest_monitor/session.py", line 85, in compute_info
INTERNALERROR>     self.set_environment_info(ExecutionContext())
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/pytest_monitor/sys_utils.py", line 72, in __init__
INTERNALERROR>     self.__cpu_freq_base = psutil.cpu_freq().current
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/psutil/__init__.py", line 1857, in cpu_freq
INTERNALERROR>     ret = _psplatform.cpu_freq()
INTERNALERROR>   File "/home/user/.local/share/virtualenvs/fpdf2/lib/python3.8/site-packages/psutil/_pslinux.py", line 701, in cpu_freq
INTERNALERROR>     raise NotImplementedError(
INTERNALERROR> NotImplementedError: can't find current frequency file

Desktop (please complete the following information):

  • OS: Linux (Ubuntu 20.04.4 LTS through WSL)
  • Python version: 3.8
  • Pytest version: 7.1.2
  • pytest-monitor version: 1.6.4

creating/binding a socket in a fixture causes it to not close

Describe the bug
Creating and binding a socket in a test and then closing it allows the same port to be reopened later in the test; however, if the socket was originally created and bound in a fixture, rebinding fails with OSError: [Errno 98] Address already in use.

To Reproduce
Complete input and output below, but the tl;dr is...

================================================= test session starts ==================================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/altendky/repos/chia-blockchain
plugins: monitor-1.6.3
collected 2 items                                                                                                      

x.py F.                                                                                                          [100%]

======================================================= FAILURES =======================================================
________________________________________________ test_a_socket[fixture] ________________________________________________

just_a_socket = ('fixture', <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>)

    def test_a_socket(just_a_socket):
        where, sock = just_a_socket
        if where == "test":
            print("creating and binding in test")
            sock = make_a_bound_socket()
    
        sock.close()
    
        sock2 = socket.socket()
>       sock2.bind(address)
E       OSError: [Errno 98] Address already in use

x.py:37: OSError
------------------------------------------------ Captured stdout setup -------------------------------------------------
creating and binding in fixture
=============================================== short test summary info ================================================
FAILED x.py::test_a_socket[fixture] - OSError: [Errno 98] Address already in use
============================================= 1 failed, 1 passed in 0.09s ==============================================
import socket
import subprocess

import pytest


address = ("127.0.0.1", 33125)

def make_a_bound_socket():
    sock = socket.socket()
    sock.bind(address)
    return sock


@pytest.fixture(params=["fixture", "test"])
def just_a_socket(request):
    where = request.param

    if where == "fixture":
        print("creating and binding in fixture")
        sock = make_a_bound_socket()
    else:
        sock = None

    yield where, sock


def test_a_socket(just_a_socket):
    where, sock = just_a_socket
    if where == "test":
        print("creating and binding in test")
        sock = make_a_bound_socket()

    sock.close()

    sock2 = socket.socket()
    sock2.bind(address)

https://gist.github.com/altendky/656bd59340288be28e8e3eb9a8d6f1b8

Copy/paste this into a terminal in an empty directory on a system with Python 3.9 available
cat > x.py << EOF
import socket
import subprocess

import pytest


address = ("127.0.0.1", 33125)

def make_a_bound_socket():
    sock = socket.socket()
    sock.bind(address)
    return sock


@pytest.fixture(params=["fixture", "test"])
def just_a_socket(request):
    where = request.param

    if where == "fixture":
        print("creating and binding in fixture")
        sock = make_a_bound_socket()
    else:
        sock = None

    yield where, sock


def test_a_socket(just_a_socket):
    where, sock = just_a_socket
    if where == "test":
        print("creating and binding in test")
        sock = make_a_bound_socket()

    sock.close()

    sock2 = socket.socket()
    sock2.bind(address)


def main():
    subprocess.run(["pytest", "--capture", "no", __file__], check=True)


# yuck
if __name__ == "__main__":
    main()
EOF
cat x.py
python3.9 -m venv venv
venv/bin/python --version --version
venv/bin/python -m pip install --upgrade pip setuptools wheel
venv/bin/pip install attrs==21.2.0 iniconfig==1.1.1 packaging==21.3 pluggy==1.0.0 py==1.11.0 pyparsing==3.0.6 pytest==6.2.5 toml==0.10.2
venv/bin/pip freeze
venv/bin/pytest x.py
venv/bin/pip install attrs==21.2.0 certifi==2021.10.8 charset-normalizer==2.0.9 idna==3.3 iniconfig==1.1.1 memory-profiler==0.60.0 packaging==21.3 pluggy==1.0.0 psutil==5.8.0 py==1.11.0 pyparsing==3.0.6 pytest==6.2.5 pytest-monitor==1.6.3 requests==2.26.0 toml==0.10.2 urllib3==1.26.7
venv/bin/pip freeze
venv/bin/pytest x.py
uname -a
lsb_release -a
To get this session
$ cat > x.py << EOF
> import socket
> import subprocess
> 
> import pytest
> 
> 
> address = ("127.0.0.1", 33125)
> 
> def make_a_bound_socket():
>     sock = socket.socket()
>     sock.bind(address)
>     return sock
> 
> 
> @pytest.fixture(params=["fixture", "test"])
> def just_a_socket(request):
>     where = request.param
> 
>     if where == "fixture":
>         print("creating and binding in fixture")
>         sock = make_a_bound_socket()
>     else:
>         sock = None
> 
>     yield where, sock
> 
> 
> def test_a_socket(just_a_socket):
>     where, sock = just_a_socket
>     if where == "test":
>         print("creating and binding in test")
>         sock = make_a_bound_socket()
> 
>     sock.close()
> 
>     sock2 = socket.socket()
>     sock2.bind(address)
> 
> 
> def main():
>     subprocess.run(["pytest", "--capture", "no", __file__], check=True)
> 
> 
> # yuck
> if __name__ == "__main__":
>     main()
> EOF
$ cat x.py
import socket
import subprocess

import pytest


address = ("127.0.0.1", 33125)

def make_a_bound_socket():
    sock = socket.socket()
    sock.bind(address)
    return sock


@pytest.fixture(params=["fixture", "test"])
def just_a_socket(request):
    where = request.param

    if where == "fixture":
        print("creating and binding in fixture")
        sock = make_a_bound_socket()
    else:
        sock = None

    yield where, sock


def test_a_socket(just_a_socket):
    where, sock = just_a_socket
    if where == "test":
        print("creating and binding in test")
        sock = make_a_bound_socket()

    sock.close()

    sock2 = socket.socket()
    sock2.bind(address)


def main():
    subprocess.run(["pytest", "--capture", "no", __file__], check=True)


# yuck
if __name__ == "__main__":
    main()
$ python3.9 -m venv venv
$ venv/bin/python --version --version
Python 3.9.5 (default, Jun  3 2021, 15:18:23) 
[GCC 9.3.0]
$ venv/bin/python -m pip install --upgrade pip setuptools wheel
Requirement already satisfied: pip in ./venv/lib/python3.9/site-packages (21.1.1)
Collecting pip
  Using cached pip-21.3.1-py3-none-any.whl (1.7 MB)
Requirement already satisfied: setuptools in ./venv/lib/python3.9/site-packages (56.0.0)
Collecting setuptools
  Using cached setuptools-60.1.0-py3-none-any.whl (952 kB)
Collecting wheel
  Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Installing collected packages: wheel, setuptools, pip
  Attempting uninstall: setuptools
    Found existing installation: setuptools 56.0.0
    Uninstalling setuptools-56.0.0:
      Successfully uninstalled setuptools-56.0.0
  Attempting uninstall: pip
    Found existing installation: pip 21.1.1
    Uninstalling pip-21.1.1:
      Successfully uninstalled pip-21.1.1
Successfully installed pip-21.3.1 setuptools-60.1.0 wheel-0.37.1
$ venv/bin/pip install attrs==21.2.0 iniconfig==1.1.1 packaging==21.3 pluggy==1.0.0 py==1.11.0 pyparsing==3.0.6 pytest==6.2.5 toml==0.10.2
Collecting attrs==21.2.0
  Using cached attrs-21.2.0-py2.py3-none-any.whl (53 kB)
Collecting iniconfig==1.1.1
  Using cached iniconfig-1.1.1-py2.py3-none-any.whl (5.0 kB)
Collecting packaging==21.3
  Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting pluggy==1.0.0
  Using cached pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting py==1.11.0
  Using cached py-1.11.0-py2.py3-none-any.whl (98 kB)
Collecting pyparsing==3.0.6
  Using cached pyparsing-3.0.6-py3-none-any.whl (97 kB)
Collecting pytest==6.2.5
  Using cached pytest-6.2.5-py3-none-any.whl (280 kB)
Collecting toml==0.10.2
  Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Installing collected packages: pyparsing, toml, py, pluggy, packaging, iniconfig, attrs, pytest
Successfully installed attrs-21.2.0 iniconfig-1.1.1 packaging-21.3 pluggy-1.0.0 py-1.11.0 pyparsing-3.0.6 pytest-6.2.5 toml-0.10.2
$ venv/bin/pip freeze
attrs==21.2.0
iniconfig==1.1.1
packaging==21.3
pluggy==1.0.0
py==1.11.0
pyparsing==3.0.6
pytest==6.2.5
toml==0.10.2
$ venv/bin/pytest x.py
================================================= test session starts ==================================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/altendky/repos/chia-blockchain
collected 2 items                                                                                                      

x.py ..                                                                                                          [100%]

================================================== 2 passed in 0.01s ===================================================
$ venv/bin/pip install attrs==21.2.0 certifi==2021.10.8 charset-normalizer==2.0.9 idna==3.3 iniconfig==1.1.1 memory-profiler==0.60.0 packaging==21.3 pluggy==1.0.0 psutil==5.8.0 py==1.11.0 pyparsing==3.0.6 pytest==6.2.5 pytest-monitor==1.6.3 requests==2.26.0 toml==0.10.2 urllib3==1.26.7
Requirement already satisfied: attrs==21.2.0 in ./venv/lib/python3.9/site-packages (21.2.0)
Collecting certifi==2021.10.8
  Using cached certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
Collecting charset-normalizer==2.0.9
  Using cached charset_normalizer-2.0.9-py3-none-any.whl (39 kB)
Collecting idna==3.3
  Using cached idna-3.3-py3-none-any.whl (61 kB)
Requirement already satisfied: iniconfig==1.1.1 in ./venv/lib/python3.9/site-packages (1.1.1)
Collecting memory-profiler==0.60.0
  Using cached memory_profiler-0.60.0-py3-none-any.whl
Requirement already satisfied: packaging==21.3 in ./venv/lib/python3.9/site-packages (21.3)
Requirement already satisfied: pluggy==1.0.0 in ./venv/lib/python3.9/site-packages (1.0.0)
Collecting psutil==5.8.0
  Using cached psutil-5.8.0-cp39-cp39-manylinux2010_x86_64.whl (293 kB)
Requirement already satisfied: py==1.11.0 in ./venv/lib/python3.9/site-packages (1.11.0)
Requirement already satisfied: pyparsing==3.0.6 in ./venv/lib/python3.9/site-packages (3.0.6)
Requirement already satisfied: pytest==6.2.5 in ./venv/lib/python3.9/site-packages (6.2.5)
Collecting pytest-monitor==1.6.3
  Using cached pytest_monitor-1.6.3-py3-none-any.whl (14 kB)
Collecting requests==2.26.0
  Using cached requests-2.26.0-py2.py3-none-any.whl (62 kB)
Requirement already satisfied: toml==0.10.2 in ./venv/lib/python3.9/site-packages (0.10.2)
Collecting urllib3==1.26.7
  Using cached urllib3-1.26.7-py2.py3-none-any.whl (138 kB)
Requirement already satisfied: wheel in ./venv/lib/python3.9/site-packages (from pytest-monitor==1.6.3) (0.37.1)
Installing collected packages: urllib3, psutil, idna, charset-normalizer, certifi, requests, memory-profiler, pytest-monitor
Successfully installed certifi-2021.10.8 charset-normalizer-2.0.9 idna-3.3 memory-profiler-0.60.0 psutil-5.8.0 pytest-monitor-1.6.3 requests-2.26.0 urllib3-1.26.7
$ venv/bin/pip freeze
attrs==21.2.0
certifi==2021.10.8
charset-normalizer==2.0.9
idna==3.3
iniconfig==1.1.1
memory-profiler==0.60.0
packaging==21.3
pluggy==1.0.0
psutil==5.8.0
py==1.11.0
pyparsing==3.0.6
pytest==6.2.5
pytest-monitor==1.6.3
requests==2.26.0
toml==0.10.2
urllib3==1.26.7
$ venv/bin/pytest x.py
================================================= test session starts ==================================================
platform linux -- Python 3.9.5, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/altendky/repos/chia-blockchain
plugins: monitor-1.6.3
collected 2 items                                                                                                      

x.py F.                                                                                                          [100%]

======================================================= FAILURES =======================================================
________________________________________________ test_a_socket[fixture] ________________________________________________

just_a_socket = ('fixture', <socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0>)

    def test_a_socket(just_a_socket):
        where, sock = just_a_socket
        if where == "test":
            print("creating and binding in test")
            sock = make_a_bound_socket()
    
        sock.close()
    
        sock2 = socket.socket()
>       sock2.bind(address)
E       OSError: [Errno 98] Address already in use

x.py:37: OSError
------------------------------------------------ Captured stdout setup -------------------------------------------------
creating and binding in fixture
=============================================== short test summary info ================================================
FAILED x.py::test_a_socket[fixture] - OSError: [Errno 98] Address already in use
============================================= 1 failed, 1 passed in 0.09s ==============================================
$ uname -a
Linux p1 5.4.0-91-generic #102-Ubuntu SMP Fri Nov 5 16:31:28 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:        20.04
Codename:       focal

Expected behavior
I expect sockets created and bound in a fixture to be able to be closed.

Desktop (please complete the following information):

  • OS: Linux (Ubuntu 20.04.3 LTS)
  • Python version: Python 3.9.5 (default, Jun 3 2021, 15:18:23) [GCC 9.3.0]
  • Pytest version: 6.2.5
  • pytest-monitor version: 1.6.3

Additional context
I probably won't get to it tonight, but I will try to see what I can learn about pytest-monitor with an eye towards fixing this. I tested back to pytest-monitor v1.0.0 and the issue seems to be present throughout.

Unable to send metrics to remote server

Description
The recent release of pytest-monitor does not incorporate the ability to send metrics to a remote server, as the interface continues to use the previous data model.

Expected behavior
The new data model must be supported effectively.

Unable to send measures on monitor-server

Describe the bug
When running pytest-monitor with option --remote-server, I am unable to send measures to the remote server. I got the warning Cannot insert session to remote monitor server (BAD REQUEST)

To Reproduce
Using monitor-server-api as the remote server.

Add Test Status Information

When reading dataframes with test information, it would be very helpful to have a column indicating whether the test succeeded or failed.
This should be done for each test, and consequently for each component, to bring clarity to data analysis.

No execution contexts get pushed when using a remote server

Describe the bug
Using a remote server, no execution contexts are pushed from the pytest session.

To Reproduce
Steps to reproduce the behavior:

  1. Run pytest with --remote
  2. Once tests are done, check your server: no execution contexts are pushed

Expected behavior
An execution context should always be pushed if the current one does not exist.

Running pytest after install pytest-monitor results in `FileNotFoundError: [Errno 2] No such file or directory (originated from sysctl(HW_CPU_FREQ))`

After installing pytest-monitor, and running pytest, I get an error:

pip install pytest-monitor
pytest

Stack trace:

INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/_pytest/main.py", line 266, in wrap_session
INTERNALERROR>     config.hook.pytest_sessionstart(session=session)
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR>     return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pluggy/_callers.py", line 34, in _multicall
INTERNALERROR>     next(gen)  # first yield
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pytest_monitor/pytest_monitor.py", line 194, in pytest_sessionstart
INTERNALERROR>     session.pytest_monitor.compute_info(session.config.option.mtr_description,
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pytest_monitor/session.py", line 85, in compute_info
INTERNALERROR>     self.set_environment_info(ExecutionContext())
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/pytest_monitor/sys_utils.py", line 72, in __init__
INTERNALERROR>     self.__cpu_freq_base = psutil.cpu_freq().current
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/psutil/__init__.py", line 1864, in cpu_freq
INTERNALERROR>     ret = _psplatform.cpu_freq()
INTERNALERROR>   File "/Users/justinzhao/mambaforge/envs/base38/lib/python3.8/site-packages/psutil/_psosx.py", line 179, in cpu_freq
INTERNALERROR>     curr, min_, max_ = cext.cpu_freq()
INTERNALERROR> FileNotFoundError: [Errno 2] No such file or directory (originated from sysctl(HW_CPU_FREQ))

Desktop:

  • OS: macOS 12.5 monterey, M1 pro
  • Python version: 3.8
  • Pytest version: 7.1.2
  • pytest-monitor version: 1.6.4

pytest.skip() in a fixture causes an AttributeError for monitor_results during teardown

Describe the bug
When a fixture skips via pytest.skip("some reason"), teardown fails for pytest-monitor with AttributeError: 'Function' object has no attribute 'monitor_results'.

To Reproduce
https://replit.com/@altendky/SwiftCrimsonDownload-1

import subprocess

import pytest


@pytest.fixture
def a_fixture():
    pytest.skip("because i want to show the issue")


def test_skipping(a_fixture):
    pass


def main():
    subprocess.run(["pytest", __file__])


if __name__ == "__main__":
    main()
============================= test session starts ==============================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /home/runner/SwiftCrimsonDownload
plugins: monitor-1.6.2
collected 1 item                                                               

main.py sE                                                               [100%]

==================================== ERRORS ====================================
______________________ ERROR at teardown of test_skipping ______________________

request = <SubRequest 'prf_tracer' for <Function test_skipping>>

    @pytest.fixture(autouse=True)
    def prf_tracer(request):
        if not PYTEST_MONITORING_ENABLED:
            yield
        else:
            ptimes_a = request.session.pytest_monitor.process.cpu_times()
            yield
            ptimes_b = request.session.pytest_monitor.process.cpu_times()
>           if not request.node.monitor_skip_test and request.node.monitor_results:
E           AttributeError: 'Function' object has no attribute 'monitor_results'

/opt/virtualenvs/python3/lib/python3.8/site-packages/pytest_monitor/pytest_monitor.py:230: AttributeError
=========================== short test summary info ============================
ERROR main.py::test_skipping - AttributeError: 'Function' object has no attri...
========================= 1 skipped, 1 error in 0.13s ==========================

Expected behavior
No exception raised for this case.

Screenshots
Output shared as text above.

Desktop (please complete the following information):

  • OS: Linux

Additional context
Template issues corrected in #49.

Naive "fix" submitted in #51.

Add postgres DB handler

Describe the solution you'd like
We'd like to keep track of different pipeline results in a dedicated database.

Describe alternatives you've considered

  • Implementing a server to collect the pipeline results seemed like a bigger overhead as it also includes more infrastructure to maintain compared to just pushing the results into a database.
  • Moving the sqlite database into the CI environment would be another option. This may lead to race conditions, if we have multiple pipelines running in parallel.

Additional context
I got a working prototype (see #78) but would like to discuss the following points:

  • How do we want to test the postgres implementation?
    • Proposal: add postgres image to pipeline to be able to run the tests against a database in docker
  • What version of psycopg (2, 3 or both) do we want to support?
    • I'd propose to support both to be future proof
    • How do we add this to the requirements?
  • Does the implementation proposed match the architecture you envisioned?
  • How do we ensure people don't need to install unnecessary dependencies? Maybe we could let people install pytest-monitor like this: pip install pytest-monitor[postgres] (see the sketch below)
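On the last point, a sketch of how an optional extra could be declared so the PostgreSQL driver is only pulled in on demand (the package metadata below is illustrative):

# setup.py (sketch) -- expose the PostgreSQL backend as an optional extra so
# that "pip install pytest-monitor[postgres]" installs the driver on demand.
from setuptools import setup

setup(
    name="pytest-monitor",
    extras_require={
        "postgres": ["psycopg2-binary"],
    },
)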

argparse.ArgumentError: argument --remote: conflicting option string: --remote on start testing session

Description
After starting a test session the plugin crashes. There are no errors when using pytest alone.
The log shows an argparse error.

To Reproduce
Steps to reproduce the behavior:

  1. Run test sessions
    (I ran the command pytest --db ./monitor.db)

Log
Traceback (most recent call last):
  File "c:\program files\python38\lib\runpy.py", line 193, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\program files\python38\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Program Files\Python38\Scripts\pytest.exe\__main__.py", line 7, in <module>
    from connexion.decorators.uri_parsing import AlwaysMultiURIParser
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 185, in console_main
    code = main()
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 143, in main
    config = _prepareconfig(args, plugins)
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 318, in _prepareconfig
    config = pluginmanager.hook.pytest_cmdline_parse(
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 93, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\manager.py", line 84, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 203, in _multicall
    gen.send(outcome)
  File "c:\program files\python38\lib\site-packages\_pytest\helpconfig.py", line 100, in pytest_cmdline_parse
    config: Config = outcome.get_result()
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 80, in get_result
    raise ex[1].with_traceback(ex[2])
  File "C:\Users\Asus\AppData\Roaming\Python\Python38\site-packages\pluggy\callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 1003, in pytest_cmdline_parse
    self.parse(args)
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 1283, in parse
    self._preparse(args, addopts=addopts)
  File "c:\program files\python38\lib\site-packages\_pytest\config\__init__.py", line 1175, in _preparse
    self.known_args_namespace = self._parser.parse_known_args(
  File "c:\program files\python38\lib\site-packages\_pytest\config\argparsing.py", line 146, in parse_known_args
    return self.parse_known_and_unknown_args(args, namespace=namespace)[0]
  File "c:\program files\python38\lib\site-packages\_pytest\config\argparsing.py", line 155, in parse_known_and_unknown_args
    optparser = self._getparser()
  File "c:\program files\python38\lib\site-packages\_pytest\config\argparsing.py", line 122, in _getparser
    arggroup.add_argument(*n, **a)
  File "c:\program files\python38\lib\argparse.py", line 1386, in add_argument
    return self._add_action(action)
  File "c:\program files\python38\lib\argparse.py", line 1590, in _add_action
    action = super(_ArgumentGroup, self)._add_action(action)
  File "c:\program files\python38\lib\argparse.py", line 1400, in _add_action
    self._check_conflict(action)
  File "c:\program files\python38\lib\argparse.py", line 1539, in _check_conflict
    conflict_handler(action, confl_optionals)
  File "c:\program files\python38\lib\argparse.py", line 1548, in _handle_conflict_error
    raise ArgumentError(action, message % conflict_string)
argparse.ArgumentError: argument --remote: conflicting option string: --remote

Desktop (please complete the following information):

  • OS: Windows 10 Pro (build 19042.746)
  • Version pytest-monitor 1.5.0
  • Python Version 3.8.2

Write monitor-output of tests to console

Is your feature request related to a problem? Please describe.
Inspecting the database after every test-run to get the output of pytest-monitor seems to be quite a hassle.

Describe the solution you'd like
Looking at pytest-benchmark, they write a summary of the output to the console.

Describe alternatives you've considered
None.

Additional context
None.

--no-monitor breaks pytest.raises and django_assert_num_queries

Describe the bug
When we add --no-monitor to the pytest.ini like so addopts = --no-monitor, pytest.raises, django_assert_num_queries and django_assert_max_num_queries (the latter two are provided by pytest-django) do not work anymore. They should be raising test failures.

To Reproduce
Steps to reproduce the behavior:
0. (install django, pytest, pytest-monitor and pytest-django)

  1. create a django project
  2. create an app
  3. create Book model in app
  4. run tests below with --no-monitor => all tests pass
  5. run tests below without --no-monitor => all tests fail
import pytest

from books.models import Book

def test_raise_exception():
    with pytest.raises(Exception):
        x = 1 / 1
        
def test_query_assertion(django_assert_num_queries, db):
    with django_assert_num_queries(0):
        print(Book.objects.all())

def test_max_queries_assertion(django_assert_max_num_queries, db):
    with django_assert_max_num_queries(0):
        print(Book.objects.all())

Expected behavior
The tests should fail, even though the --no-monitor flag was provided.

Desktop (please complete the following information):

  • OS: Ubuntu 22.04
  • Python version: 3.11
  • Pytest version: 8.2.0
  • pytest-monitor version: 1.6.6

Additional context
Please ask if I missed to provide necessary information!

add support for unittests (via pytest)

As discussed here #38 pytest almost fully supports unittest, but currently pytest-monitor skips all unittest-based tests.

I originally proposed to document this, but it was suggested that perhaps this can be fixed and pytest-monitor could support unittest-based tests too, which would be awesome.

So as suggested opening this Issue to track the feasibility/progress on this front.

Collect child processes metrics

We have a project that starts a few child processes, which we run tests against.

Are there any plans to track child processes statistics?

Describe the solution you'd like
Include child processes stats on metrics.

Describe alternatives you've considered
Hack our own stats/metrics collector.
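For reference, psutil can already enumerate child processes, so a collector could aggregate over the whole process tree roughly as follows (a sketch only, not an existing pytest-monitor feature):

import psutil


def process_tree_rss_mb(pid=None):
    # Sum the resident set size of a process and all of its children, in MB.
    root = psutil.Process(pid)  # pid=None means the current process
    processes = [root] + root.children(recursive=True)
    total = 0
    for proc in processes:
        try:
            total += proc.memory_info().rss
        except psutil.NoSuchProcess:  # a child exited while we were iterating
            pass
    return total / (1024 * 1024)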

A local database is always created, even with --no-db option

Describe the bug
A local database is always created, even with --no-db option

To Reproduce
Steps to reproduce the behavior:

  1. Execute pytest with pytest --no-db
  2. At the end of the session, you can observe that a local database has been created.

Expected behavior
No local database is created

Missing metrics for failed tests

Describe the bug
Monitoring results for failed tests are missing from .pymon.

To Reproduce

  1. Create some tests:
# test.py

def test_1():
    assert 1 == 1

def test_2():
    assert 1 == 2
  2. Run pytest: pytest test.py
  3. .pymon has only one entry (for test_1) in the TEST_METRICS table.

Expected behavior
Unless I missed something in the documentation, I expect metrics to be reported for failed tests as well.

Desktop (please complete the following information):

  • OS: Linux (Ubuntu 20.04), Windows 10
  • Python version: 3.8, 3.11
  • Pytest version: 6.x, 7.x
  • pytest-monitor version: 1.6.5

Incorrect result

The bug
In the function below, the plugin calculated that memory usage is less than one MB. However, when I print the actual size of the array it is between 7 and 8 MB. I assume it is related to memory_profiler, because I experienced a similar result when using it directly.

import sys

import numpy as np


def test_extension():
    ROWS = 1000000000
    data = np.empty(ROWS)
    with open('temp.txt', 'w') as out:
        out.write(str(sys.getsizeof(data)))

    assert True

Desktop:

  • OS: Fedora 34
  • pytest-monitor: 1.6.2
  • pytest: 6.2.5
  • python: 3.8.6
  • numpy: 1.21.2

discuss best memory measurement approach and possible leak detection

Splitting off from #38 (comment) to focus the discussion on the nuances of the memory measurement side of this extension. I will also bring up memory leak detection, which I obviously don't expect pytest-monitor to support, but perhaps something good will come out of discussing these issues with knowledgeable devs. I have a real need to identify tests with smallish leaks, as these leaks add up and require more RAM than they should. And when running thousands of tests under xdist/multiple pytest workers, the memory requirements go up dramatically.

  1. I noticed you aren't doing gc.collect() before taking a new measurement, without which you may get incorrect reports, as most of the time GC doesn't run immediately on python object deletion. At least in all the profilers I have been developing I've been using this approach.

    this was split off into a separate issue: #40

  2. memory_profiler that you use relies on RSS, which I also use mostly but it's not super-reliable if you get memory swapped out. Recently I discovered PSS - proportional set size - via smem via this https://unix.stackexchange.com/a/169129/291728, and it really helped to solve the "process being killed because it used up its cgroups quota" which I used in the analysis of pytest being killed on CI here: huggingface/transformers#11408 (please scroll down to smem discussion in item 4).

    The question is how to measure it in Python, if it proves to be a better metric, the way the C program smem does (a sketch using psutil appears at the end of this issue).

  3. How can we tell how much memory test A used vs test B, if both tests load the same large module. It'd appear that test A used a lot of memory if it was used first, when in reality it could be far from the truth. So if the tests execution order were to be reversed the results would be completely different. This is especially so in envs where tests are loaded randomly by choice or because it's already so (e.g. xdist) - so for example in the context of pytest-monitor if one were to compare different sessions one would get reports of varying memory, but it'd be false reports.

    One possible solution is to somehow pre-run all the tests so that all the modules and any globals get pre-loaded, and measure only the second time the tests are run. This would be a problem with pytest-xdist, but for the sake of regression testing one could run the full test suite with pytest -n 1 just to ensure the approach is consistent. There are other ways to make the tests run in the same order, but that only works if no new tests are added.

  4. It'd be great to be able to use such plugin for memory leak detection, in which case the first time any test gets loaded the measurement should be discarded and we would expect 0 extra memory usage for 2nd round (well, sometimes 3rd) if there is no leak. So far I've been struggling using RSS to get that 0, I get fluctuations in reported memory.

    I guess this is tightly related to (3) and probably is really the same issue.

  5. Continuing the memory leakage, how could we detect badly written tests, where a test doesn't clean completely up on teardown and leaves hanging objects that may continue consuming significant amounts of memory. This is very difficult to measure, because any test may appear to be such test if it loads some module for the first time.

@js-dieu
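Regarding point 2, psutil exposes PSS through memory_full_info() on Linux, so a measurement along these lines is possible (sketch; reading PSS needs access to /proc/<pid>/smaps and is Linux-only):

import psutil


def current_pss_mb():
    # Proportional set size (PSS) of the current process in MB (Linux only).
    mem = psutil.Process().memory_full_info()
    return mem.pss / (1024 * 1024)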

Monitor usage resulting in repeated printing

Describe the bug
I use captured print statements to build test reports. When using pytest-monitor, I notice that these statements are printed multiple times.

To Reproduce
Steps to reproduce the behavior:

  1. Create a simple hello-world pytest function.
  2. Run the test with the -s flag to allow printing to console, observe multiple statements.
  3. Run again but include -p no:monitor, observe single statement.

Expected behavior
The printing of a single statement.


Desktop (please complete the following information):

  • OS: Pop!_OS 20.04LTS
  • Python 3.7.8
  • Pytest 6.0.2
  • Other plugins: metadata, html
