
pytest-mpl's Introduction

pytest-mpl

pytest-mpl is a pytest plugin to facilitate image comparison for Matplotlib figures.

For each figure to test, an image is generated and then subtracted from an existing reference image. If the RMS of the residual is larger than a user-specified tolerance, the test will fail. Alternatively, the generated image can be hashed and compared to an expected value.
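
The comparison idea, roughly sketched (this is not the plugin's exact code, just the RMS-of-residual concept described above):

import numpy as np

def rms_difference(expected, actual):
    # expected and actual are pixel arrays of identical shape
    residual = expected.astype(float) - actual.astype(float)
    return np.sqrt(np.mean(residual ** 2))

# The test fails if rms_difference(baseline, generated) > tolerance.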

For more information, see the pytest-mpl documentation.

Installation

pip install pytest-mpl

For detailed instructions, see the installation guide in the pytest-mpl docs.

Usage

First, write test functions that create a figure. These image comparison tests are decorated with @pytest.mark.mpl_image_compare and return the figure for testing:

import matplotlib.pyplot as plt
import pytest

@pytest.mark.mpl_image_compare
def test_plot():
    fig, ax = plt.subplots()
    ax.plot([1, 2])
    return fig

Then, generate reference images by running the test suite with the --mpl-generate-path option:

pytest --mpl-generate-path=baseline

Then, run the test suite as usual, but pass --mpl to compare the returned figures to the reference images:

pytest --mpl

By also passing --mpl-generate-summary=html, a summary of the image comparison results will be generated in HTML format:

(Screenshots of the HTML summary: the full listing, a filtered view, and a single result page.)

For more information on how to configure and use pytest-mpl, see the pytest-mpl documentation.

Contributing

pytest-mpl is a community project maintained for and by its users. There are many ways you can help!

  • Report a bug or request a feature on GitHub
  • Improve the documentation or code

pytest-mpl's Issues

Document how to make pytest-mpl optional

One of the issues we struggled with for astroplan was how to make the image comparison tests optional, i.e. have them skipped if the optional dependencies matplotlib or nose aren't installed.

@astrofrog – What do you think about the solution @eteq came up with?
(see https://github.com/astropy/astroplan/blob/master/astroplan/conftest.py)

def pytest_configure(config):
    if hasattr(astropy_pytest_plugins, 'pytest_configure'):
        # sure ought to be true right now, but always possible it will change in
        # future versions of astropy
        astropy_pytest_plugins.pytest_configure(config)

    #make sure astroplan warnings always appear so we can test when they show up
    warnings.simplefilter('always', category=AstroplanWarning)

    try:
        import matplotlib
        import nose  # needed for the matplotlib testing tools
        HAS_MATPLOTLIB_AND_NOSE = True
    except ImportError:
        HAS_MATPLOTLIB_AND_NOSE = False

    if HAS_MATPLOTLIB_AND_NOSE and config.pluginmanager.hasplugin('mpl'):
        config.option.mpl = True
        config.option.mpl_baseline_path = 'astroplan/plots/tests/baseline_images'

@astrofrog – I'd like to set up image comparison tests for other packages in the near future and have them optional. Could you please document the recommended setup?

I think it would also be good to have a short list of "packages using pytest-mpl" ... there's nothing better than working examples to get started.
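
As a starting point for such documentation, a minimal conftest.py sketch along the lines of the astroplan solution above (the baseline path 'mypackage/tests/baseline_images' is illustrative):

def pytest_configure(config):
    try:
        import matplotlib  # noqa: F401
        has_matplotlib = True
    except ImportError:
        has_matplotlib = False

    # Only turn on image comparison when both matplotlib and the
    # pytest-mpl plugin are actually available.
    if has_matplotlib and config.pluginmanager.hasplugin('mpl'):
        config.option.mpl = True
        config.option.mpl_baseline_path = 'mypackage/tests/baseline_images'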

Hanging when generating images if Inkscape is on path (Windows)

Ran into an interesting bug, where my pytest command was hanging when using the --mpl-generate-path option (note, this is my first time using pytest-mpl, so let me know if this is user error...)

I noticed in my task manager that Inkscape was running at full-throttle on one core as a child process of the pytest command, and that killing it would cause the image generation to complete, with the following warning:

 C:\anaconda3\envs\******\lib\importlib\_bootstrap.py:219: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__
    return f(*args, **kwds)

I'm not sure why Inkscape is being called as part of the build process, but it caused a significant lag (I let it run for about 10 minutes before killing it, and it hadn't finished). After removing Inkscape from my %PATH%, my images build as expected, although still with the same warning as above.

Allow SVG for image comparison

With PNG output depending on both Matplotlib and Freetype version, I was wondering whether it would make sense to use the new features in Matplotlib that allow deterministic SVG files (https://matplotlib.org/users/whats_new.html#reproducible-ps-pdf-and-svg-output) to do the comparison with SVG files instead.

Of course, the idea of a 'tolerance' would be trickier to define here and I think we'd simply need to require images to match exactly, but at least the output should depend only on Matplotlib version (and the package being tested) and not on other libraries such as Freetype (if we ignore LaTeX). Using SVG files would also make it easier to keep reference image files in a repo since any diffs over time will be smaller. Of course we could keep support for PNG plots, but I'm suggesting adding a format option so that projects that want to use SVG could do that instead.

Another option would be to generate and store images as SVG files but rasterize them for the image comparison, though presumably this will again depend on the version of the library used for the rasterization.

@dopplershift @tacaswell - do you have any thoughts on this? Do you see any downsides with some packages doing the comparison in SVG space instead of with PNG files? Are there likely to be frequent changes to the SVG output without actual noticeable changes?

Just to clarify, this would assume that Matplotlib itself tests that the SVG and PNG output are consistent and that we don't have to test all formats in every package (to some extent we do this currently with packages that use pytest-mpl - we only check PNG output and assume SVG is fine)

Matplotlib deprecated remove_text

  /home/travis/virtualenv/python3.6-dev/lib/python3.6/site-packages/pytest_mpl/plugin.py:181:
 MatplotlibDeprecationWarning: The remove_text function was deprecated in version 2.1. 
Use remove_ticks_and_titles instead.
    MplImageComparisonTest.remove_text(fig)

v11.0 py3-only wheel on PyPI actually contains v12.0

The py3-only wheel on PyPI at this URL, published on 5th November 2020, seems to actually contain the v12.0 code. This breaks pinning pytest-mpl to v11.0, which is a necessary workaround for those who unfortunately need to keep Python 3.5 alive (me). See also issue #110.

Support non-PNG extensions

matplotlib's own image_comparison "decorator defaults to generating png, pdf and svg output, but in interest of keeping the size of the library from ballooning we should only include the svg or pdf outputs if the test is explicitly exercising a feature dependent on that backend." (from here).

Only supporting PNG seems very limiting, if not counter-intuitive.

Status of pytest-mpl?

Hi all,

we adopted pytest-mpl instead of matplotlib's homegrown decorator back when mpl still depended on nose: thanks for this great tool!

I also thought that the purpose of pytest-mpl was to replace the mpl testing internals in order to avoid duplicated code once MPL made the switch to pytest, but this didn't happen. So now I wonder: what is the status/purpose of pytest-mpl and what are the long-term plans? Should we go back to using MPL's image_comparison decorator instead?

pytest-mpl is platform dependent?

I am experimenting with pytest-mpl in combination with bitbucket-pipelines to write unit tests for figure plotting functionality in my code. I am using a Windows 10 machine. The default docker image on bitbucket pipelines uses another platform.

I added a unit test which seems to run successfully on my local machine. However, when I push it to the remote build, it fails. I inspected the output image in the remote build using Bitbucket artifacts and the images are ever so slightly different: the text has slightly different spacing on the remote build than on my local machine.

How can I get the images to look exactly the same, such that the comparison is platform-independent?

Close figure after saving

I have a number of parameterized tests that generate ~150 pngs at ~500kB per image. The test suite not only uses a lot of memory (>10GB) but it also becomes incredibly slow after ~20 images. This seems to be because the figure objects are not closed.

The matplotlib warning can be seen with --strict (note I had to register mpl_image_compare as a marker in pytest.ini):

$ pytest test/test_plotting.py -s --strict -v
... 
...
...
test/test_plotting.py::test_plot[test/data/set20.csv] /home/venv/local/lib/python2.7/site-packages/matplotlib/pyplot.py:524: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
  max_open_warning, RuntimeWarning)
PASSED
...

This fixed it: gshiba@40a0b51

Add an option for decorator to remove text

Text tends to be the most problematic when trying to reproduce images across many operating systems, etc. For some figures, text is not actually what is being tested, so it would be nice to have an option for the mpl_image_compare decorator that will automatically strip all the text from a figure before generating the baseline or comparing to the baseline.
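
A sketch of how such an option could look in a test (pytest-mpl did later gain a remove_text keyword, as mentioned in the 0.6 release issue below):

import matplotlib.pyplot as plt
import pytest

@pytest.mark.mpl_image_compare(remove_text=True)
def test_lines_only():
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3])
    ax.set_title("stripped before comparison")  # text is removed, not compared
    return fig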

Support yield to save a figure multiple times

I have some tests for figures which use widgets to update the figure. It would be really helpful if I could yield from the test with a figure and then update it and yield from it again or return the figure the final time.

(I am not 100% sure if that's possible with pytest)

0.6 release?

Any concrete plans for what would need to land for a 0.6 release? Some interesting things have landed since 0.5, like remove_text and style support.

Take into account matplotlib versions

As matplotlib evolves, it's inevitable that the exact rendering of the output will change. It would therefore be nice to have a way to take this into account by having support for multiple versions of baseline images if necessary.

Provide an option to return base64-encoded images in CI log

Some CI services don't have good support for artifacts. In those cases we could print the expected and actual images to the log as base64, embedded in a script that regenerates the files. Then one can simply copy/paste the script, run it locally, and get the images. See https://github.com/WorldWideTelescope/pywwt/blob/master/pywwt/tests/test_qt_widget.py#L67 for an example of this.

This could be triggered e.g. by an environment variable or a command-line flag (--mpl-base64).

Opened this following mention of this problem in #81
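
A rough sketch of the idea; the helper name and the one-liner format are illustrative, not an existing API:

import base64

def dump_image_to_log(path, label):
    # Print the image as base64 wrapped in a command that rebuilds it;
    # copy/paste the printed line from the CI log and run it locally.
    with open(path, 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('ascii')
    print("python -c \"import base64; open('{0}.png', 'wb')"
          ".write(base64.b64decode('{1}'))\"".format(label, encoded))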

Make options more consistent

Some options are available only through the decorator (e.g. the style), while some are accessible as command-line options and one (the results dir) is also an ini option. We should make this more consistent so that e.g. styles can be set globally on the command-line or in an ini file.

Take action after failure on Travis?

Hi,

thanks for pytest-mpl! I'd like to know if it's possible to upload the result images somewhere after a failure on Travis. Our tests on Travis fail for an unknown reason (RMS on the order of 8), and I'd like to have a look at the images.

AttributeError: 'mpl_image_compare' not a registered marker

When I try to use pytest-mpl on py.test version 3.4.2, I get the above error. After some googling, I realized that I could solve the problem by adding the following line as the first line of pytest_configure() in plugin.py (around line 99):

config.addinivalue_line('markers', "mpl_image_compare: Compares images from matplotlib")

I have not found out whether there was a recent change in pytest or whether something else is going on, but it works for me...

Figures are not closed when pytest is run without --mpl option

I get this warning when running our graphics tests:

RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).
    max_open_warning, RuntimeWarning)

I assumed that pytest would close the figures after testing? Or is it the user's responsibility to do so?

Thanks!
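
Until the plugin closes figures itself, one possible stopgap is an autouse fixture in conftest.py that closes all open figures after each test:

import matplotlib.pyplot as plt
import pytest

@pytest.fixture(autouse=True)
def close_figures():
    yield             # run the test
    plt.close('all')  # then close every figure it left open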

v0.12 is broken with Python 3.5

The README declares that this plugin is compatible with Python 3.5, but v0.12 contains several f-string interpolations (for example plugin.py:275), which are not supported in 3.5. This results in a SyntaxError:

[... snipping most of the stack trace ...]
   File "/path/to/tox/work/py35/lib/python3.5/site-packages/pytest_mpl/plugin.py", line 275
     raise ValueError(f"The mpl summary type '{generate_summary}' is not supported.")
                                                                                   ^
 SyntaxError: invalid syntax

unrecognized arguments: --mpl-generate-path

I tried following the instructions in the README, but I get this error:

$ cat test_a.py 
import pytest
import matplotlib.pyplot as plt

@pytest.mark.mpl_image_compare
def test_succeeds():
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    ax.plot([1,2,3])
    return fig
$ py.test --mpl-generate-path=images test_a.py 
usage: py.test [options] [file_or_dir] [file_or_dir] [...]
py.test: error: unrecognized arguments: --mpl-generate-path=images

I did install it:

$ python -c 'import pytest_mpl'

Support check_figures_equal style comparison without any baseline_image

Hi there,

First off, thanks for the great plugin! Just a long-time user over at PyGMT; we've been using pytest-mpl quite extensively to catch lots of bugs 😄

Anyhow, we're just wondering if there's scope here for something like @pytest.mark.mpl_check_equal based on matplotlib's check_figures_equal function for "generating and comparing two figures" (fig_ref and fig_test) on the fly (without having a baseline image).

The main use case for this is to keep our repository size small, since the baseline images can take up a lot of space. Currently, pytest-mpl does something similar to matplotlib's image_comparison decorator function. We've been putting some thought into this at GenericMappingTools/pygmt#555, and rather than duplicating work, we're wondering if the logic could sit here in pytest-mpl or somewhere in the matplotlib organization.

Happy to submit a PR if someone can point me in the right direction.
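
For reference, this is roughly how matplotlib's own decorator is used; both figures are drawn inside the test and compared on the fly, with no baseline image stored on disk (the requested mpl_check_equal marker would presumably mirror this):

from matplotlib.testing.decorators import check_figures_equal

@check_figures_equal(extensions=["png"])
def test_plot(fig_test, fig_ref):
    # Draw equivalent content into both figures; the decorator renders
    # them and compares the results.
    fig_test.subplots().plot([1, 3, 5])
    fig_ref.subplots().plot([1, 3, 5])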

BUG: AxesImage.set_data([[]]) causes FloatingPointException

The following code leads to a fatal Floating Point Exception 8 error:

In [1]: import numpy as np

In [2]: import matplotlib.pyplot as plt

In [3]: im = plt.imshow(np.random.random((128, 128)))

In [4]: im.set_data([[]])
/Users/tom/miniconda3/envs/dev/lib/python3.6/site-packages/matplotlib/image.py:327: RuntimeWarning: divide by zero encountered in double_scalars
  in_bbox.width / A.shape[1],
Floating point exception: 8

which then causes the Python session to end. This is a regression in 2.0, as it worked in 1.5.

0.10 release date

@astrofrog is there a plan for making a 0.10 release? I need the fix in #66 and it would be great to get a release so I don't have to tell people to install from github. I'd be more than happy to help make it happen :)

Remove dependency on nose

We currently depend on functions in matplotlib that depend on nose. It would be nice to avoid that dependency since it means having two testing frameworks installed.

Slash "/" in test name

My code (simplified)

@pytest.fixture(params=['/some/data1', '/some/data2'])
def dataframe(request):
    return transform(request.param)

@pytest.mark.mpl_image_compare(baseline_dir=BASELINE_DIR)
def test_plot(dataframe):
    fig = my_plotter(dataframe)
    return fig

When I run pytest test_my_plotters.py --mpl --mpl-generate-path=baseline, I get something like (simplified):

IOError: [Errno 2] No such file or directory: '.../baseline/test_plot_/some/data1.png'

../venv/local/lib/python2.7/site-packages/matplotlib/backends/backend_agg.py:532: IOError

In short, print_png in matplotlib/backends/backend_agg.py is failing because the intermediate paths (.../some/...) is missing.

This workaround seems to work (gshiba@3c8aa6e), but I'm not sure if it is the right approach.

Thanks
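
Another possible workaround, reusing the transform() call from the snippet above: give the parametrized cases explicit, filesystem-safe ids so that no "/" ends up in the test name used for the baseline file:

import pytest

@pytest.fixture(params=[
    pytest.param('/some/data1', id='data1'),
    pytest.param('/some/data2', id='data2'),
])
def dataframe(request):
    # The ids ('data1', 'data2') are what appear in the test name.
    return transform(request.param)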

Make baseline folder configurable?

At the moment I think the reference images have to be in a folder called baseline.

A common use case could be that people want to store the baseline images outside their code repo to keep it small.

Can you make this configurable somehow?

(A small comment about the docs: at first I was confused by the example in the readme, where you run py.test --mpl-generate-path=images ... and then say the generated images have to be in baseline ... why not generate them in baseline from the start and check them there? Maybe this could be explained a little more clearly, either by changing the example to use baseline directly or by explicitly mentioning that the files have to be copied over from images to baseline.)
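
As a per-test workaround (and what a global option could mirror), the decorator accepts a baseline_dir keyword, as used in the "Slash in test name" issue above; the absolute path shown here is illustrative:

import matplotlib.pyplot as plt
import pytest

# Keep the reference images outside the repository.
@pytest.mark.mpl_image_compare(baseline_dir='/data/shared/pytest-mpl-baseline')
def test_plot():
    fig, ax = plt.subplots()
    ax.plot([1, 2])
    return fig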

remove_text=True doesn't remove axis labels

This is because remove_ticks_and_titles from matplotlib doesn't remove them. The question is, should ours? This would break any tests relying on the current behavior. However, I think the current behavior is really confusing.

Testing local changes in rcParams

I want to test a function which modifies rcParams dynamically depending on the user input. For example

import matplotlib.pyplot as plt

def set_lw(lw):
    plt.rcParams['lines.linewidth'] = lw

For static changes, this could be solved by generating an mplstyle file and setting MPLCONFIGDIR. In my case I want to set, e.g., the cycler dynamically from cmaps, which would result in several thousand different mplstyles, so that is not the way to go.

I would be thankful for any hint or suggestion how to test this kind of function.
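
One possible approach, assuming the dynamic settings can be applied while the figure is drawn: use matplotlib's rc_context, which restores the previous rcParams when the block exits, so nothing leaks into other tests while the drawn artists keep their properties:

import matplotlib.pyplot as plt
import pytest

@pytest.mark.mpl_image_compare
def test_dynamic_linewidth():
    with plt.rc_context({'lines.linewidth': 4}):
        fig, ax = plt.subplots()
        ax.plot([1, 2, 3])
    return fig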

1.4 pollutes output

When running on matplotlib 1.4 with the classic.mplstyle file, the output (either when a test fails or when running pytest -s) is filled with messages like:

Bad key "axes.spines.right" on line 219 in
/Users/rmay/miniconda3/envs/legacy/lib/python2.7/site-packages/pytest_mpl/classic.mplstyle.
You probably need to get an updated matplotlibrc file from
http://matplotlib.sf.net/_static/matplotlibrc or from the matplotlib source
distribution

Given that this file is only used on matplotlib 1.4, would it make sense to remove all of the lines matplotlib 1.4 doesn't understand? I'm guessing most of these are new rcParams for 1.5/2.0.

pytest-mpl can't be used with --open-files option

This was one of the issues I ran into when implementing image comparison tests for astroplan (astropy/astroplan#83 (comment)), but I wanted to make an issue for it here so that it's not forgotten.

See https://travis-ci.org/astropy/astroplan/jobs/77905171#L725

 AssertionError: File(s) not closed:
/home/travis/miniconda/envs/test/lib/python2.7/site-packages/matplotlib/mpl-data/fonts/ttf/Vera.ttf
/home/travis/miniconda/envs/test/lib/python2.7/site-packages/astropy/tests/pytest_plugins.py:460: AssertionError

So apparently matplotlib leaves some font files open and thus pytest-mpl can't be used together with the Astropy --open-files option.
(For now I've removed --open-files from the Astroplan tests as a temp workaround.)

@mdboom or @astrofrog – Is this something that can be fixed in matplotlib or pytest-mpl or just a fact of life?

Let image baseline directory be relative to where the tests are, not where pytest was run

If I have a directory structure like

tests
    component1
         baseline
    component2
          baseline

If I run pytest in the component1 or component2 directories, things work fine. But if I run pytest in the tests directory, all tests fail because the baseline directory is assumed to be relative to where pytest was run. I'd like to be able to run pytest in any of these directories (tests, tests/component1, tests/component2).
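
A possible workaround until this is configurable: build an absolute baseline path from the test module's own location and pass it to the decorator via baseline_dir (used in other issues above):

import os

import matplotlib.pyplot as plt
import pytest

# Resolve the baseline directory relative to this test file rather than
# the current working directory.
BASELINE_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'baseline')

@pytest.mark.mpl_image_compare(baseline_dir=BASELINE_DIR)
def test_component_plot():
    fig, ax = plt.subplots()
    ax.plot([1, 2])
    return fig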

Non-matplotlib figure objects need more than savefig

I have been using pytest-mpl to test the gmt-python library. We have a Figure object that implements savefig and the tests were working fine in Python 3.5 and 3.6 (an example test script).

A few weeks ago, our tests for 3.5 started failing (example error log). I reproduced the failing tests locally using a 3.5 version of our default conda environment. I managed to track it down to plt.close not recognizing my Figure class as a valid figure object (see the pyplot.close code).

The real mystery is that the same test with the same matplotlib version (2.0.2) passes on 3.6.

Is there any way that pytest-mpl could avoid the call to plt.close(fig)? Even just dropping the fig argument and closing the current figure would already solve the problem. I can work around it by adding an int attribute to my Figure, but that is very ugly.

If there is no way around it, then it would be good to document the need to implement an int attribute to fool plt.close.

Otherwise, I'm very impressed that this worked so flawlessly! Congratulations!

Forcing preference order for decorators (to make parametrize() work for mpl_image_compare())

In photutils we have tests that use @pytest.mark.parametrize() to loop through the different aperture classes. However, using @pytest.mark.mpl_image_compare() in this framework doesn't work as it ideally would, because the first decorator doesn't seem to parametrize the arguments of the latter.

This is what I tried to do:

@pytest.mark.skipif('not HAS_MATPLOTLIB')
@pytest.mark.parametrize(('aperture_class', 'params'), TEST_APERTURES)
@pytest.mark.mpl_image_compare(filename='{0}.png'.format(aperture_class), tolerance=1.5)
def test_aperture_plots(aperture_class, params):

and ended up with a NameError:

==================================== ERRORS ====================================
_________ ERROR collecting photutils/tests/test_aperture_photometry.py _________
photutils/tests/test_aperture_photometry.py:66: in <module>
>   @pytest.mark.mpl_image_compare(filename='{0}.png'.format(aperture_class), tolerance=1.5)
E       NameError: name 'aperture_class' is not defined

@cdeil mentioned in astropy/astroplan#86 (comment) that it may be possible to configure this in pytest, however I haven't had the time to check it out.

Support hierarchical tests

Hi,

I am working a lot with images and therefore love the idea of the project. I played around with it a little bit and noticed something.

When I group my tests in classes, the baseline image file is named only after the test function itself.

Example:

import matplotlib.pyplot as plt
import pytest

import imageutils  # the module under test


class Test_Imageutils:
    @pytest.mark.mpl_image_compare
    def test_put_text(self):
        from skimage import data
        astronaut = data.astronaut()
        annotated = imageutils.put_text(astronaut.copy(), "hi", loc="center")
        fig, ax = plt.subplots(1, 1, figsize=(6, 6))
        ax.imshow(annotated)
        return fig

And the image in baseline is named test_put_text.png

I think it would be a great idea to add an option to write the full hierarchy, either into folders or into the file name. For example

  • baseline/Test_Imageutils/test_put_text.png
  • baseline/Test_Imageutils-test_put_text.png

What do you think?

I could dig a little into the code and come up with a PR if anyone thinks it is a good idea.

Colormap not preserved in scatter plot when generating baseline plot.

Hello, when we are trying to create a baseline for a unit test that uses a scatter plot, we have noticed that pytest-mpl has not been preserving the colormap used in our scatter plot.

For example, when we run the unit test directly (importing the module and showing the resulting figure), we get the plot we wish to compare against. However, whenever we use pytest to generate the baseline figure, the colormap is not preserved. (Both the expected figure and the generated baseline, test_time_height_scatter.png, were attached to the issue.)

Main test:

@pytest.mark.mpl_image_compare(tolerance=30)
def test_time_height_scatter():
    sonde_ds = arm.read_netcdf(
        sample_files.EXAMPLE_SONDE1)

    display = TimeSeriesDisplay({'sgpsondewnpnC1.b1': sonde_ds},
                                figsize=(7, 3))
    display.time_height_scatter('tdry', day_night_background=True)
    sonde_ds.close()

    return display.fig

time_height_scatter routine:

    def time_height_scatter(
            self, data_field=None, dsname=None, cmap='rainbow',
            alt_label=None, alt_field='alt', cb_label=None, **kwargs):
        """
        Create a time series plot of altitude and data variable with
        color also indicating value with a color bar. The color bar is
        positioned to serve both as the indicator of the color intensity
        and the second y-axis.

        Parameters
        ----------
        data_field: str
            Name of data field in the object to plot on second y-axis
        height_field: str
            Name of height field in the object to plot on first y-axis.
        dsname: str or None
            The name of the datastream to plot
        cmap: str
            Colorbar color map to use.
        alt_label: str
            Altitude first y-axis label to use. If not set will try to use
            long_name and units.
        alt_field: str
            Label for field in the object to plot on first y-axis.
        cb_label: str
            Colorbar label to use. If not set will try to use
            long_name and units.
        **kwargs: keyword arguments
            Any other keyword arguments that will be passed
            into TimeSeriesDisplay.plot module when the figure
            is made.
        """
        if dsname is None and len(self._arm.keys()) > 1:
            raise ValueError(("You must choose a datastream when there are 2 "
                              "or more datasets in the TimeSeriesDisplay "
                              "object."))
        elif dsname is None:
            dsname = list(self._arm.keys())[0]

        # Get data and dimensions
        data = self._arm[dsname][data_field]
        altitude = self._arm[dsname][alt_field]
        dim = list(self._arm[dsname][data_field].dims)
        xdata = self._arm[dsname][dim[0]]

        if alt_label is None:
            try:
                alt_label = (altitude.attrs['long_name'] +
                             ''.join([' (', altitude.attrs['units'], ')']))
            except KeyError:
                alt_label = alt_field

        if cb_label is None:
            try:
                cb_label = (data.attrs['long_name'] +
                            ''.join([' (', data.attrs['units'], ')']))
            except KeyError:
                cb_label = data_field

        colorbar_map = plt.cm.get_cmap(cmap)
        self.fig.subplots_adjust(left=0.1, right=0.86,
                                 bottom=0.16, top=0.91)
        ax1 = self.plot(alt_field, color='black', **kwargs)
        ax1.set_ylabel(alt_label)
        ax2 = ax1.twinx()
        sc = ax2.scatter(xdata.values, data.values, c=data.values,
                         marker='.', cmap=colorbar_map)
        cbaxes = self.fig.add_axes(
            [self.fig.subplotpars.right + 0.02, self.fig.subplotpars.bottom,
             0.02, self.fig.subplotpars.top - self.fig.subplotpars.bottom])
        cbar = plt.colorbar(sc, cax=cbaxes)
        ax2.set_ylim(cbar.get_clim())
        cbar.ax.set_ylabel(cb_label)
        ax2.set_yticklabels([])

        return self.axes[0]

Any help you could provide on this would be appreciated.

file-failed-diff.png created when compare_images returns None

I am running some regression tests, using only compare_images()
My code looks something like this:

from matplotlib.testing.compare import compare_images

# ... generate a .png file here
# ... compare generated file with reference file from before code changes:

result = compare_images(reference_image_file_name, test_image_file_name, tol=11.0)
if result is not None:
    print('result=',result)
assert result is None

Normally this works fine. If result is not None, then in the same location as my testimg.png file I also find a testimg-failed-diff.png file showing which pixels differ from the reference image.

The problem is I have found a case where result is None as expected, yet compare_images is still generating a testimg-failed-diff.png file.

Is this normal?
In the past I've only seen a failed-diff.png file when result is not None.

(If it helps, my actual pytest code is here.)

Large diffs from font rendering

Hello, thanks for the module.

I know that you are aware of the issues but I'm trying to understand what I can do about them. In particular I see a large difference between two images that are similar, with some apparent minor difference in the font causing the failure.

I've inspected the generated images and they are the same (physical) size, dpi, etc.

I see some discussion (e.g. astropy/astropy#7150) about maybe just choosing the font appropriately to minimize this. Is that the current recommended solution?

I also gave the perceptual hash (#21) a go and it does generate identical hashes for the two images, which is nice. However, I haven't yet had time to look into it more deeply to really understand what it is doing, so I don't necessarily want to rely on it.

The attached images are the baseline (baseline-test_plot_dither), the generated image (which is missing the title since remove_text=True is enabled; #68 would be relevant here), and the failed diff (test_plot_dither-failed-diff).

This is coming from panoptes/POCS#735

(Note that even getting these images out of Travis was a huge pain, as Travis doesn't allow artifacts to be uploaded on pull requests... but that's a different issue.)

Set Agg backend?

Does it make sense for pytest-mpl to set the backend to "Agg" in there somewhere? I don't see a use case where it doesn't make sense, but that doesn't mean I'm not missing something. I just want to be able to run my tests headless a bit more easily.
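
In the meantime, a minimal way to force a headless backend for the whole test session is to select it in conftest.py before pyplot is imported anywhere:

# conftest.py
import matplotlib

matplotlib.use('Agg')  # must run before matplotlib.pyplot is imported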

Margins removed in baseline

When I run or save a modified version of the README code in Jupyter, I get a different image than the generated baseline.

Example code run in a Jupyter cell:

#%%file test.py
%matplotlib inline

import matplotlib.pyplot as plt
import pytest

@pytest.mark.mpl_image_compare
def test_succeeds():
    fig = plt.figure()
    ax = fig.add_subplot(1,1,1)
    ax.plot([1,2,3])
    fig.savefig("test")
    return fig

Running test_succeeds() in Jupyter gives a resulting image, and a saved test.png, with margins (screenshot attached to the issue).

Next, I generate a test.py file by uncommenting the first line, commenting out the second line, and then running the cell:

%%file test.py
#%matplotlib inline

...

From a command prompt, in the appropriate directory I run the following to generate the baseline image:

> py.test --mpl-generate-path=baseline test.py

In the baseline folder, a comparable image called test_succeeds.png is created without margins (screenshot attached to the issue).

Aside from the difference in sizes, I notice the baseline image uses the classic mpl style. Perhaps this is the cause of the missing margins. I report this as an issue because, ideally, the baseline image used for testing should be the same image as the one shown and saved by Jupyter.


I tested this with Anaconda 4.2.
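
A sketch of one way to narrow the gap, assuming the mismatch comes from the classic style being applied during baseline generation: pin the style on the decorator (the style keyword is mentioned in the 0.6 release issue above):

import matplotlib.pyplot as plt
import pytest

@pytest.mark.mpl_image_compare(style='default')
def test_succeeds():
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3])
    return fig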
