pytest-django-queries's Introduction

pytest-django-queries

Generate performance reports from your django database performance tests (inspired by coverage.py).


Usage

Install pytest-django-queries, write your pytest tests, and either mark any test whose queries should be counted or use the count_queries fixture.

Note: to use the latest development build, use pip install --pre pytest-django-queries

import pytest


@pytest.mark.count_queries
def test_query_performances():
    Model.objects.all()


# Or...
def test_another_query_performances(count_queries):
    Model.objects.all()

Each test file and/or package is treated as a category, and each test inside a category contributes its data; see Visualising Results for more details.

You will find the full documentation here.

Recommendation when Using Fixtures

You might run into cases where fixtures generate queries that you don't want counted in the results, or you may simply want to use the pytest-django plugin alongside pytest-django-queries, which will add unwanted queries to your results.

To avoid this, make count_queries the last fixture to execute.

At the same time, you might want to use the power of pytest markers to separate the query-counting tests from other tests. In that case, you can tell the marker not to automatically inject the count_queries fixture into your test:

import pytest


@pytest.mark.count_queries(autouse=False)
def test_retrieve_main_menu(fixture_making_queries, count_queries):
    pass

Notice the usage of the keyword argument autouse=False and the count_queries fixture being placed last.

Using pytest-django alongside pytest-django-queries

We recommend the following when using pytest-django:

import pytest


@pytest.mark.django_db
@pytest.mark.count_queries(autouse=False)
def test_retrieve_main_menu(any_fixture, other_fixture, count_queries):
    pass

Integrating with GitHub

TBA.

Testing Locally

Simply install pytest-django-queries through pip and run your tests using pytest. A report will be generated in your current working directory in a file named .pytest-queries.

Note: to override the save path, pass the --django-db-bench PATH option to pytest.

Visualising Results

You can generate a table from the tests results by using the show command:

django-queries show

You will get something like this to represent the results:

+---------+--------------------------------------+
| Module  |          Tests                       |
+---------+--------------------------------------+
| module1 | +-----------+---------+------------+ |
|         | | Test Name | Queries | Duplicated | |
|         | +-----------+---------+------------+ |
|         | |   test1   |    0    |     0      | |
|         | +-----------+---------+------------+ |
|         | |   test2   |    1    |     0      | |
|         | +-----------+---------+------------+ |
+---------+--------------------------------------+
| module2 | +-----------+---------+------------+ |
|         | | Test Name | Queries | Duplicated | |
|         | +-----------+---------+------------+ |
|         | |   test1   |   123   |     0      | |
|         | +-----------+---------+------------+ |
+---------+--------------------------------------+

Exporting the Results (HTML)

For a nicer presentation, use the html command to export the results as HTML.

django-queries html

It will generate something like this.

Comparing Results

After running your tests, run django-queries backup (which optionally takes a path), then rerun the tests. You can then run django-queries diff to generate results looking like this:

screenshot
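As an illustration of what a diff computes, here is a minimal sketch. It assumes results are flat mappings of test name to query count, which is a simplification for illustration and not the plugin's actual report format:

```python
def diff_counts(before, after):
    """Return {test: (old_count, new_count)} for every test whose
    query count changed between two runs.

    The flat ``{test name: count}`` input shape is an illustrative
    assumption, not the plugin's real file format.
    """
    names = sorted(set(before) | set(after))
    return {
        name: (before.get(name), after.get(name))
        for name in names
        if before.get(name) != after.get(name)
    }
```

A test that appears in only one of the two runs shows up with `None` on the missing side, so additions and removals are visible alongside regressions.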

Development

First, clone the project locally, then install it using the command below.

./setup.py develop

After that, install the development and testing requirements by running the command below.

pip install -e .[test]

pytest-django-queries's People

Contributors

nyankiyoshi, zachvalenta


pytest-django-queries's Issues

Generate a more compact table

Is your feature request related to a problem? Please describe.
The show output is hard to read because it displays too much at once.

Describe the solution you'd like
We could take inspiration from coverage.py:

Name                      Stmts   Miss  Cover   Missing
-------------------------------------------------------
my_program.py                20      4    80%   33-35, 39
my_other_module.py           56      6    89%   17-23
-------------------------------------------------------
TOTAL                        76     10    87%
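A compact renderer along these lines could be sketched as follows. The input shape (module name mapped to per-test query counts) is an assumption for illustration, not the plugin's internal format:

```python
def compact_table(results):
    """Render a coverage.py-style compact summary: one row per module,
    plus a grand total.

    ``results`` maps module name -> {test name: query count}; this
    shape is illustrative, not the plugin's actual data structure.
    """
    lines = ["%-25s %8s" % ("Name", "Queries"), "-" * 34]
    total = 0
    for module in sorted(results):
        count = sum(results[module].values())
        total += count
        lines.append("%-25s %8d" % (module, count))
    lines.append("-" * 34)
    lines.append("%-25s %8d" % ("TOTAL", total))
    return "\n".join(lines)
```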

Describe alternatives you've considered
The HTML export, but it doesn't sum things.

Additional context
None.

Add an option for counting queries on all tests

Is your feature request related to a problem? Please describe.
Some users might want to count queries on all their tests without writing dedicated test cases for the checks, so we could include an option for that.

Describe the solution you'd like
We could add an option such as --queries-count-all.

Describe alternatives you've considered
None.

Additional context
None.

The user should be able to customize the diff table labels from the environment

Is your feature request related to a problem? Please describe.
Users might get confused by the left and right labels, which make sense to some users but not to all of them. What is left? What is right?

Describe the solution you'd like
Since actually renaming those fields would be a major change, it is easier to let users decide what they want as labels via environment variables:

DJ_QUERIES_LEFT_LABEL = "before"
DJ_QUERIES_RIGHT_LABEL = "after"
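Reading these variables would amount to a simple lookup with a fallback. The variable names follow the example above; they are proposed here, not an existing API of the plugin:

```python
import os

# Proposed, not yet implemented: the diff table falls back to the
# current "left"/"right" labels when the variables are unset.
left_label = os.environ.get("DJ_QUERIES_LEFT_LABEL", "left")
right_label = os.environ.get("DJ_QUERIES_RIGHT_LABEL", "right")
```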

Extend Utilities and Testing to Perform Snapshots

Is your feature request related to a problem? Please describe.
We would like to be able to record a bench session and generate snapshot files, so that we can compare diffs and analyze SQL queries and query counts.

Describe the solution you'd like
Generate a snapshot per test per module. This would use a YAML format to make it more readable (human-friendly) for pull request reviewers as well as for developers working locally.

We would also include EXPLAIN queries, as part of #41.

We would include:

  • The SQL query without the values; this helps detect issues and huge numbers of parameters
  • The query count in the root (top)

Which would look something like:

count: 12
duplicates: 8
queries:
  - sql: >
      SELECT foo FROM bar WHERE id IN (?, ?, ?, ?, ?, ?)

Implement the diff tool

This will compare a given file with the latest one, as follows:

django-queries diff ...files

With 1 <= len(files) <= 2

pytest-xdist is not fully supported

Describe the bug
Only one worker is actually saving the data when running multiple workers through pytest-xdist.

To Reproduce
Steps to reproduce the behavior:

  1. Install pytest-xdist
  2. Use at least two workers (-n 2)
  3. The results file contains half or fewer of the results.

Expected behavior
To have all the results.

Additional context
pytest_sessionstart/finish is run on each worker, not once for the whole pytest session.
We will need to have the plugin registered into pytest to actually make it work properly over the workers.
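One possible workaround, sketched under the assumption that per-worker files are merged afterwards, relies on the PYTEST_XDIST_WORKER environment variable that pytest-xdist sets in each worker (e.g. "gw0"):

```python
import os


def results_path(base=".pytest-queries"):
    """Return a per-worker results path under pytest-xdist.

    In an xdist worker, write to a per-worker file so workers don't
    overwrite each other; a separate merge step would combine them.
    This helper is illustrative, not the plugin's implementation.
    """
    worker = os.environ.get("PYTEST_XDIST_WORKER")
    return "%s.%s" % (base, worker) if worker else base
```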

Use black, flake8 and pre-commit

We need some kind of standard style in the future. We will use and require the black style and impose the flake8 checks.

In addition, we will provide a pre-commit configuration, an installation guide and a usage guide.

Add execution time with threshold in diff

Describe the solution you'd like
To be able to see whether a test improves or gets worse in execution time, e.g. due to missing indexes.

Describe alternatives you've considered
There is pytest-benchmark.

Release Plans

0.1.0

  • The plugin is now able to support multiple pytest sessions without conflicting (#1)
  • The plugin no longer benchmarks everything; instead, the user manually flags each test as benchmarked or not (#1).

1.0.0.dev1 (12 May 2019)

  • Add the cli with the show and html options (#3)

1.0.0a1 (13 May 2019)

  • Add option to customize the output file, e.g. --queries-report=path (#10 -> #13)

1.0.0a2 (16 May 2019)

  • Freeze weak requirements that may break the tests
  • Add RTD documentation (#7)
  • Add changelog file and entries + a release guide (#6)
  • Fix #15
  • Move the HTML template into a single file (#18)

1.0.0b1 (19 May 2019)

  • Raw Diff tool (#22)
    • Term Colors (#22)

1.0.0rc1 (27 May 2019)

  • Switch to black (#8)
  • Enforce flake8 (#8)
  • Add possibility to backup results (#24)
  • Add possibility to save HTML without being dependent on any platform (#27)

1.0.0rc2 (5 June 2019)

  • Add help text for parameters in cli (#28)
  • Add backup command in cli to make it faster to backup (#29)
  • Remove test prefix from dotted import strings (#34)
  • Add note to documentation on how to run tests separately (#23)
  • Fix issues with fixtures and django plugins (#35)

1.0.0rc3 (6 June 2019)

  • Add support for pytest-xdist (multi-workers tests - #36)

1.0.0 (7 June 2019)

Should be released as rc3 is, if everything goes correctly.

1.1.0 (TBA)

  • Change in the HTML template: generate sums of queries (#12)
  • Generate a more compact table (#11) - as an export command (aliased to show --compact).

Run EXPLAIN Against Queries

Is your feature request related to a problem? Please describe.
We would like to be able to quickly analyze queries and detect issues, whether a missing index, an unoptimized query, etc.

Describe the solution you'd like
A possibility would be to intercept QuerySets or SQL queries using django's connection wrapper feature. We would run EXPLAIN and save/append into a file or snapshot.

Possible issue to investigate: MySQL seems to trigger an infinite recursion when doing this; make sure it works on all RDBMSs through travis-ci.

We also need to make sure we are not adding overhead or running queries multiple times (especially writes), and we must also allow queries that return values, e.g.:

UPDATE foo SET bar = 1 RETURNING id;
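Using Django's connection.execute_wrapper hook, a minimal sketch might look like the following. The collected list and the SELECT-only filter are illustrative choices, not the plugin's design:

```python
collected = []  # (sql, plan) pairs gathered during the wrapped block


def explain_wrapper(execute, sql, params, many, context):
    """Django execute_wrapper callable that EXPLAINs read queries.

    The original query runs first; only SELECT statements are then
    re-issued with EXPLAIN, so writes never execute twice.
    """
    result = execute(sql, params, many, context)
    if sql.lstrip().upper().startswith("SELECT"):
        with context["connection"].cursor() as cursor:
            cursor.execute("EXPLAIN " + sql, params)
            collected.append((sql, cursor.fetchall()))
    return result


# Usage (inside a test), sketched:
# from django.db import connection
# with connection.execute_wrapper(explain_wrapper):
#     Model.objects.all()
```

Skipping non-SELECT statements sidesteps the write-query concern above, at the cost of not explaining writes at all; RETURNING queries would need separate handling.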

pytest-django helpers are making queries in background when using markers

Describe the bug
If we do something like this:

@pytest.mark.django_db
@pytest.mark.count_queries
def test_get_payment_token(api_client):
    pass

It will generate queries from the django_db fixture.

The solution is:

def test_get_payment_token(db, api_client, count_queries):
    pass

But then we lose the ability to filter tests by markers.

Add trailing whitespace to JSON file

Is your feature request related to a problem? Please describe.
I run a standard pre-commit lint which checks for a trailing newline in all files. When the .pytest-queries file is updated, the trailing newline is stripped and must be re-added manually.

Describe the solution you'd like
Append a trailing newline to the output.
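A sketch of the proposed fix, assuming the report is written with the standard json module (the save_report name is hypothetical):

```python
import json


def save_report(data, path):
    """Serialize the report, then append the trailing newline that
    pre-commit's end-of-file check expects.
    """
    with open(path, "w") as fp:
        json.dump(data, fp, indent=2)
        fp.write("\n")
```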

Describe alternatives you've considered
Files can be excluded from the lint, but I'd prefer to avoid special-casing where possible

Additional context
The lint which flags this file is recommended by the pre-commit project, so will probably affect a fair number of users

Add backup command in cli to make it faster to backup

This will act just like #24, but it is much faster for the user to back up results than typing pytest --django-queries-backup, which is overly long but required by pytest to avoid conflicts. It is also more IDE-friendly (e.g. users can do quick runs).

Add RTD documentation

  • A single page
  • A more comprehensive and deeper guide than the README file
  • Will contain notes on how to contribute

Usage

...

Visualizing

Raw tables

HTML tables

Customize the template (advanced)

Development

...

Generate sums of queries

Is your feature request related to a problem? Please describe.
In the HTML results, we could use a row to sum the results per section.

Describe alternatives you've considered
None.

Additional context
None.

Add test coverage feature

Is your feature request related to a problem? Please describe.
Users might want to know what their coverage over the queries is.

Describe the solution you'd like
To have a coverage report over all the executed code.

Describe alternatives you've considered
We could use coverage.py and only include the marked tests.

Add releasing guide

  • Add the list of files to update before releasing
  • Add how to tag
  • Add how to publish release notes
