se2p / FlaPy
A Tool for Mining Flaky Tests at Scale
License: GNU Lesser General Public License v3.0
Basically, automate the tests described in the README.
For step 1 (running): check whether the SQLite result files are being created (not directly, but indirectly, by asserting that we can derive a coverage table (CTA) from them).
For steps 2 and 3: save an existing ResultsDir as a test resource and apply the parsing steps to it.
I would keep these two parts separate, otherwise the test might become flaky ;-)
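A hypothetical sketch of the step-1 check (function name, file glob, and schema are assumptions, not FlaPy's actual code): rather than asserting on the SQLite files directly, assert that a coverage table can be read out of them.

```python
# Collect the table names of every *.sqlite file in a ResultsDir, so a
# test can assert that a coverage table is derivable from the run.
import sqlite3
from pathlib import Path


def derivable_tables(results_dir: Path) -> list:
    """Return the table names found across all *.sqlite files in results_dir."""
    tables = []
    for db_path in sorted(results_dir.glob("*.sqlite")):
        con = sqlite3.connect(db_path)
        try:
            rows = con.execute(
                "SELECT name FROM sqlite_master WHERE type='table'"
            ).fetchall()
            tables.extend(name for (name,) in rows)
        finally:
            con.close()
    return tables
```

A test for step 1 could then assert `len(derivable_tables(out_dir)) > 0` instead of checking for the files themselves.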
https://about.codecov.io/blog/message-regarding-the-pypi-package/
"EDIT 2023-04-17: We have re-instated the codecov package to PyPI as version 2.1.13. We are unfortunately unable to push 2.1.12, and users who still wish to use this package are encouraged to upgrade. We will continue to develop a deprecation plan for this uploader and will announce plans in the upcoming weeks.
On April 12, 2023, Codecov removed the codecov package from PyPI. Our intent was to remove a deprecated, rarely-used package from active support."
• Installing codecov (2.1.12): Failed
RuntimeError
Unable to find installation candidates for codecov (2.1.12)
at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/poetry/installation/chooser.py:109 in choose_for
105│
106│ links.append(link)
107│
108│ if not links:
→ 109│ raise RuntimeError(f"Unable to find installation candidates for {package}")
110│
111│ # Get the best link
112│ chosen = max(links, key=lambda link: self._sort_key(package, link))
I suggest updating codecov to 2.1.13
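A minimal sketch of the corresponding change, assuming codecov is declared in pyproject.toml (the exact dependency group may differ):

```toml
[tool.poetry.dev-dependencies]
codecov = "2.1.13"  # 2.1.12 was pulled from PyPI; pin the re-instated release
```

After editing, re-run `poetry lock` and `poetry install` so the lock file picks up the new version.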
The following columns contain sets as values:
Passed_sameOrder, Failed_sameOrder, ... of the passed-failed overview
Verdicts_sameOrder, Verdicts_randomOrder of the tests overview
This is bad, because it makes the resulting CSV output non-deterministic, since the sets have non-deterministic order between different Python executions.
Originally it was used to filter out duplicate result entries (the same test is reported multiple times in the same run), which actually occurs in certain junit-xml files.
Solution: keep the duplicate-dropping set operation, but wrap it in sorted to convert the result back to a list.
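A minimal sketch of the proposed fix (the function name is illustrative, not FlaPy's actual code):

```python
# Deduplicate verdicts with a set (the same test can be reported
# multiple times in one junit-xml file), then sort so the CSV
# output is deterministic across Python executions.
def dedup_deterministic(verdicts):
    return sorted(set(verdicts))


print(dedup_deterministic(["Passed", "Failed", "Passed"]))
# → ['Failed', 'Passed']
```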
Watch out for the following adjustments that are also needed:
I use the "-it" flags when running flapy.sh run
locally, so that the user can kill the process using Ctrl-C:
https://github.com/se2p/FlaPy/blob/master/run_line.sh#L86
However, this causes the end-to-end test to fail with "the input device is not a TTY"
when using Docker. With Podman it works fine.
=> Remove the "-it" flags, but add some mechanism to make the process killable via Ctrl-C.
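One possible mechanism (a sketch, not the actual run_line.sh): pass -t only when stdin is a TTY, so Ctrl-C still works in interactive runs while non-interactive runs do not fail with "the input device is not a TTY".

```shell
# Choose the Docker TTY flags based on whether stdin is a terminal.
if [ -t 0 ]; then
  TTY_FLAGS="-it"   # interactive shell: allocate a TTY so Ctrl-C works
else
  TTY_FLAGS="-i"    # CI / piped input: no TTY available
fi
echo "docker run ${TTY_FLAGS} --rm flapy ..."
```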
Version: macOS Ventura 13.4
I think it might have something to do with getopts in flapy.sh
Log messages (private information [omitted])
"./flapy.sh run --out-dir example_results flapy_input_example_tiny.csv
--run-on not specified -> defaulting to 'locally'
-- ./run_csv.sh
Num runs: run
Run on: locally
Constraint:
Input CSV: flapy.sh
Plus random runs:
Core args:
Out-dir:
----
input-csv length: 159
-- ./run_line.sh
input csv file: [omitted]/PycharmProjects/flapy-github-fork/FlaPy/flapy-results_20230613_143551/!flapy.run//input.csv
slurm array task id:
input csv line num: 2
Num runs: run
----
Project name:
Project url:
Project hash:
PyPi tag:
Funcs to trace:
Tests to be run:
cat: /etc/hostname: No such file or directory
-- ./run_container.sh (run_container.sh)
Project name:
Project url:
Project hash:
PyPi tag:
Funcs to trace:
Tests to be run:
Num runs: run
Plus random runs:
Iteration results: [omitted]//PycharmProjects/flapy-github-fork/FlaPy/flapy-results_20230613_143551/_20230613_143551_2
Flapy Args:
-- Prepare for docker command
-- prepare_for_docker_command: Using default setup script
-- Define FlaPy docker image
-- Creating alias 'flapy_docker_command'
-- Pulling FlaPY docker image
-- Node [omitted]
-- Prepare for docker command
-- prepare_for_docker_command: Using default setup script
-- Define FlaPy docker image
-- Creating alias 'flapy_docker_command'
-- Loading image registry.hub.docker.com/gruberma/flapy
2023-06-13T14:35:52+02:00
Using default tag: latest
latest: Pulling from gruberma/flapy
Digest: sha256:9afb8080e80eb7fc9f51ea25b6e6b0499e7a0e3e1f2e93348163c04a4e82c941
Status: Image is up to date for registry.hub.docker.com/gruberma/flapy:latest
registry.hub.docker.com/gruberma/flapy:latest
2023-06-13T14:35:55+02:00
-- Echo image+container info
-- Prepare for docker command
-- prepare_for_docker_command: Using default setup script
-- Define FlaPy docker image
-- Creating alias 'flapy_docker_command'
-- IMAGES
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu_test
..."
The pypi_tag option controls how dependencies are installed and was added at some point.
Making it part of proj_cols allows it to appear in the TestsOverview and therefore makes it easier to feed the TestsOverview back in as a new input CSV.
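A sketch of the round trip this enables (column names are assumptions, not FlaPy's exact schema): with pypi_tag among the project columns, those columns of the TestsOverview already form valid rows for a new input CSV.

```python
# Keep only the per-project columns of TestsOverview entries and
# deduplicate them, yielding rows for a new input CSV.
proj_cols = ["Project_Name", "Project_URL", "Project_Hash", "PyPi_tag"]


def to_input_rows(overview):
    seen, rows = set(), []
    for entry in overview:
        row = tuple(entry[c] for c in proj_cols)
        if row not in seen:  # one input row per project
            seen.add(row)
            rows.append(row)
    return rows
```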
While trying to build FlaPy, I came across an issue when running poetry build: it was not able to find the .whl file. I fixed the problem by manually renaming the file from flapy-0.2.0-py3-none-any.whl to FlaPy-0.2.0-py3-none-any.whl.
macOS Version: Ventura 13.2.1
Dependencies have been installed via poetry install.
Pytest has been invoked twice to see if the same test fails twice.
====================================== FAILURES ======================================
__________________________________________ test_tracing ____________________________________________
def test_tracing():
generic_test_tracing(FlakyAnalyser)
tests/test_run_tests.py:25:
flaky_analyser = <class 'flapy.run_tests.FlakyAnalyser'>
def generic_test_tracing(flaky_analyser: FlakyAnalyser):
project_name = "test_resources"
out_dir = Path(os.path.dirname(test_output.__file__)) / "run_tests" / "test_tracing"
out_dir.mkdir(parents=True, exist_ok=True)
log_file = out_dir / "execution.log"
# Clean out_dir
for path in out_dir.glob("*"):
if ".gitignore" not in path.name and "__pycache__" not in path.name:
rm_recursively(path)
tmp_dir = tempfile_seeded.mkdtemp()
print(f"Using temporary directory {tmp_dir}")
# fmt: off
args = [
"run_tests.py",
"--logfile", str(log_file.absolute()),
"--repository", os.path.dirname(test_resources.__file__),
"--project-name", project_name,
"--temp", tmp_dir,
"--number-test-runs", "1",
"--tests-to-be-run", "test_trace_me.py",
"--trace",
"test_trace_me.py::test_hashing "
"test_trace_me.py::TestFixtures::test_foo "
"test_trace_me.py::test_random "
"test_trace_me.py::test_path "
"test_trace_me.py::test_super_call",
]
# fmt: on
analyser = flaky_analyser(args)
analyser.run()
results_files = list(Path(tmp_dir).glob("*"))
print(f"Found following results files: {results_files}")
assert len(results_files) > 0
E assert 0 > 0
E + where 0 = len([])
tests/test_run_tests.py:69: AssertionError
------------------------------------------------------------------------------------------------- Captured stdout call -------------------------------------------------------------------------------------------------
Using temporary directory /var/folders/wp/hyj7k6ls33g86nz45c7p0hsr0000gn/T/tmpr25wfuh5
Found following results files: []
------------------------------------------------------------------------------------------------- Captured stderr call -------------------------------------------------------------------------------------------------
INFO: Config: Namespace(logfile='"HERE_MY_PROJECT_PATH"/flapy-github-fork/FlaPy/test_output/run_tests/test_tracing/execution.log', repository='/"HERE_MY_PROJECT_PATH"/flapy-github-fork/FlaPy/test_resources', temp='/var/folders/wp/hyj7k6ls33g86nz45c7p0hsr0000gn/T/tmpr25wfuh5', project_name='test_resources', pypi_tag=None, num_runs=1, random_order_bucket=None, random_order_seed=None, trace='test_trace_me.py::test_hashing test_trace_me.py::TestFixtures::test_foo test_trace_me.py::test_random test_trace_me.py::test_path test_trace_me.py::test_super_call', tests_to_be_run='test_trace_me.py')
INFO: Iteration 0 of 1 for project test_resources (random=None)
INFO: no pypi tag specified -> falling back to searching for requirements
INFO: found the following requirements files: ['/var/folders/wp/hyj7k6ls33g86nz45c7p0hsr0000gn/T/tmpr25wfuh5/tmpvmpbrhox/requirements.txt']
INFO: executing commands ['echo "which python:
WARNING: Did not create file /var/folders/wp/hyj7k6ls33g86nz45c7p0hsr0000gn/T/tmpr25wfuh5/test_resources_output0test_trace_me.py.xml while running the tests.
NumPy threw some errors on my machine; I needed to update it to v'^1.23.3'.
Also, sklearn is deprecated, so I replaced it with scikit-learn v'^1.1.2'.
SciPy threw errors caused by missing BLAS and LAPACK, so I added scipy v'^1.9.1' to poetry, which fixed it.
SciPy then conflicted with the Python version, so I adjusted Python in pyproject.toml to '>=3.8,<3.12'.
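The changes above correspond roughly to the following pyproject.toml excerpt (version specifiers taken from the description; the section layout is an assumption):

```toml
[tool.poetry.dependencies]
python = ">=3.8,<3.12"
numpy = "^1.23.3"
scikit-learn = "^1.1.2"   # replaces the deprecated sklearn stub
scipy = "^1.9.1"          # pulls in working BLAS/LAPACK wheels
```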