
pandas-vet's Introduction

pandas-vet

pandas-vet is a plugin for flake8 that provides opinionated linting for pandas code.


Basic usage

Take the following script, drop_column.py, which contains valid pandas code:

# drop_column.py
import pandas

df = pandas.DataFrame({
    'col_a': [i for i in range(20)],
    'col_b': [j for j in range(20, 40)]
})
df.drop(columns='col_b', inplace=True)

With pandas-vet installed, if we run Flake8 on this script, we will see three warnings raised.

$ flake8 drop_column.py

./drop_column.py:2:1: PD001 pandas should always be imported as 'import pandas as pd'
./drop_column.py:4:1: PD901 'df' is a bad variable name. Be kinder to your future self.
./drop_column.py:7:1: PD002 'inplace = True' should be avoided; it has inconsistent behavior

We can use these to improve the code.

# pandastic_drop_column.py
import pandas as pd

ab_dataset = pd.DataFrame({
    'col_a': [i for i in range(20)],
    'col_b': [j for j in range(20, 40)]
})
a_dataset = ab_dataset.drop(columns='col_b')

For a full list, see the Supported warnings page of the documentation.

Motivation

Starting with pandas can be daunting. The usual internet help sites are littered with different ways to do the same thing, and some features that the pandas docs themselves discourage live on in the API. pandas-vet is (hopefully) a way to help make pandas a little more friendly for newcomers by taking some opinionated stances about pandas best practices. It is designed to help users reduce the pandas universe to a smaller, more consistent set of idioms.

The idea to create a linter was sparked by Ania Kapuścińska's talk at PyCascades 2019, "Lint your code responsibly!". The package was largely developed at the PyCascades 2019 sprints.

Many of the opinions stem from Ted Petrou's excellent Minimally Sufficient Pandas. Other ideas are drawn from pandas docs or elsewhere. The Pandas in Black and White flashcards have a lot of the same opinions too.

pandas-vet's People

Contributors

alysivji, clarkearl, deppen8, jamesmyatt, kplauritzen, notsambeck, pwoolvett, rhornberger, sigmavirus24, simchuck, tdsmith, wadells


pandas-vet's Issues

New approach to docs with JupyterBook

I have recently started adopting JupyterBook for documenting projects and I love it. It is great for the things I would like to add to pandas-vet, like tutorials, and it supports mixing Markdown and reST in Sphinx docs.

It also has nice pre-defined GitHub Actions for building the docs and putting them in a branch for GitHub Pages.

This would replace issues #70 and #87.

Check for .stack

Check for .stack method. See flashcard. Give error message:

Prefer '.melt' to '.stack'. '.melt' allows direct column renaming and avoids a MultiIndex
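A minimal sketch of the two patterns (the example frame and column names here are arbitrary):

import pandas as pd

df = pd.DataFrame({"id": [1, 2], "col_a": [10, 20], "col_b": [30, 40]})

# .stack() returns a Series with a MultiIndex that usually needs renaming afterwards
stacked = df.set_index("id").stack()

# .melt() names the output columns directly and keeps a flat index
melted = df.melt(id_vars="id", var_name="variable", value_name="value")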

Import pandas as pd

Check for import pandas as pd pattern.

If only import pandas, give error message:

Use import pandas as pd convention

Add docstrings to everything

Every class, function, etc. should have a docstring that at least describes functionality, but many are currently lacking.

Check for .ix

Check for deprecated .ix. Give error message:

'.ix' is deprecated. Use '.loc' or '.iloc' instead.
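A short illustration of the flagged indexer and its replacements (example data is arbitrary):

import pandas as pd

df = pd.DataFrame({"col_a": [1, 2, 3]}, index=["x", "y", "z"])

# value = df.ix["y", "col_a"]            # deprecated, and removed in modern pandas
value_by_label = df.loc["y", "col_a"]     # label-based indexing
value_by_position = df.iloc[1, 0]         # position-based indexing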

Check for `df` variable names

As raised in #64, variables named df are generally bad practice. However, they are used so widely that it would be pretty disruptive to introduce this as a normal check. It should be the first of the off-by-default checks (#71).

Refactor tests

All of our tests are currently in test_PD001.py. We need to either change the name of that file to something more general or break out our tests into separate files corresponding to the error codes. I prefer the latter. So for each error code, we should have a different test_PD<code>.py file.

Dockerize the development environment

Is your feature request related to a problem? Please describe.
I want to make it easier for new contributors to get going.

Describe the solution you'd like
Create a Dockerfile that installs the necessary files and dependencies for developing and testing the code. Modern IDEs like VSCode and PyCharm allow you to connect to a running Docker container, which makes it a breeze to work inside one.

inplace set to a variable raises exception

Describe the bug
flake8 raises an exception when the inplace argument is set from a variable rather than a literal.

To Reproduce
Steps to reproduce the behavior:

Have the following code in a file

def some_function(dataFrame, in_place=False):
    return dataFrame.drop([], inplace=in_place)

Expected behavior
Allow flake8 to report violations and not throw exceptions.

Screenshots
[Screenshot of the exception ("Screen Shot 2021-09-11 at 11 42 54 AM") attached to the original issue.]

Additional context

bash-5.1# cat /usr/lib/python3.9/site-packages/pandas_vet/version.py 
__version__ = "0.2.2"

This is running in a Docker container based on alpine:3.14.1. The same results were obtained on a Mac.

Things work if we do not provide a variable:

def some_function(dataFrame, in_place=False):
    return dataFrame.drop([], inplace=False)

Find non-passing pandas scripts in the wild

It would be cool to have examples of real pandas scripts to feed to the tests. This might help us track down bugs. For example, I am anticipating that we might catch false positives in some places where pandas overlaps with numpy.

Documentation site

I would like to expand our docs beyond just the README file. A simple Read-the-Docs site or GitHub Page would be good. It should include pages for

  • README equivalent
  • Code of Conduct
  • Contributor Guide
  • API docs
  • example usage

Conda and pip install (0.2.2) not showing plugins with flake8 --version; 0.2.1 works

Installing pandas-vet 0.2.2 with pip or conda on Windows or WSL results in flake8 --version returning output with an empty plugin list:

3.7.9 () CPython 3.7.3 on Windows
3.7.9 () CPython 3.8.2 on Linux

Expected behavior

  1. Make a new env
  2. pip install flake8 pandas-vet==0.2.2
flake8 --version
3.7.9 (flake8-pandas-vet: 0.2.1, mccabe: 0.6.1, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.7.3 on Windows

To Reproduce
Steps to reproduce the behavior:

  1. Make a new env
  2. pip install flake8 pandas-vet or pip install flake8 pandas-vet==0.2.2
  3. flake8 --version
  4. See error

Desktop Environment:

  • OS: Microsoft Windows [Version 10.0.18363.720]
  • Python versions tested: 3.7.3, 3.8.2
  • pandas-vet version: 0.2.2

Additional Context
Running flake8 -v --version results in:

flake8.plugins.manager    MainProcess     65 INFO     Loading entry-points for "flake8.extension".
flake8.plugins.manager    MainProcess     73 INFO     Loading entry-points for "flake8.report".
flake8.plugins.manager    MainProcess     80 INFO     Loading plugin "F" from entry-point.
flake8.plugins.manager    MainProcess    118 INFO     Loading plugin "pycodestyle.ambiguous_identifier" from entry-point.
flake8.plugins.manager    MainProcess    123 INFO     Loading plugin "pycodestyle.bare_except" from entry-point.
flake8.plugins.manager    MainProcess    123 INFO     Loading plugin "pycodestyle.blank_lines" from entry-point.
flake8.plugins.manager    MainProcess    123 INFO     Loading plugin "pycodestyle.break_after_binary_operator" from entry-point.
flake8.plugins.manager    MainProcess    123 INFO     Loading plugin "pycodestyle.break_before_binary_operator" from entry-point.
flake8.plugins.manager    MainProcess    123 INFO     Loading plugin "pycodestyle.comparison_negative" from entry-point.
flake8.plugins.manager    MainProcess    124 INFO     Loading plugin "pycodestyle.comparison_to_singleton" from entry-point.
flake8.plugins.manager    MainProcess    124 INFO     Loading plugin "pycodestyle.comparison_type" from entry-point.
flake8.plugins.manager    MainProcess    124 INFO     Loading plugin "pycodestyle.compound_statements" from entry-point.
flake8.plugins.manager    MainProcess    124 INFO     Loading plugin "pycodestyle.continued_indentation" from entry-point.
flake8.plugins.manager    MainProcess    124 INFO     Loading plugin "pycodestyle.explicit_line_join" from entry-point.
flake8.plugins.manager    MainProcess    124 INFO     Loading plugin "pycodestyle.extraneous_whitespace" from entry-point.
flake8.plugins.manager    MainProcess    125 INFO     Loading plugin "pycodestyle.imports_on_separate_lines" from entry-point.
flake8.plugins.manager    MainProcess    125 INFO     Loading plugin "pycodestyle.indentation" from entry-point.
flake8.plugins.manager    MainProcess    125 INFO     Loading plugin "pycodestyle.maximum_doc_length" from entry-point.
flake8.plugins.manager    MainProcess    125 INFO     Loading plugin "pycodestyle.maximum_line_length" from entry-point.
flake8.plugins.manager    MainProcess    125 INFO     Loading plugin "pycodestyle.missing_whitespace" from entry-point.
flake8.plugins.manager    MainProcess    126 INFO     Loading plugin "pycodestyle.missing_whitespace_after_import_keyword" from entry-point.
flake8.plugins.manager    MainProcess    126 INFO     Loading plugin "pycodestyle.missing_whitespace_around_operator" from entry-point.
flake8.plugins.manager    MainProcess    126 INFO     Loading plugin "pycodestyle.module_imports_on_top_of_file" from entry-point.
flake8.plugins.manager    MainProcess    126 INFO     Loading plugin "pycodestyle.python_3000_async_await_keywords" from entry-point.
flake8.plugins.manager    MainProcess    126 INFO     Loading plugin "pycodestyle.python_3000_backticks" from entry-point.
flake8.plugins.manager    MainProcess    127 INFO     Loading plugin "pycodestyle.python_3000_has_key" from entry-point.
flake8.plugins.manager    MainProcess    127 INFO     Loading plugin "pycodestyle.python_3000_invalid_escape_sequence" from entry-point.
flake8.plugins.manager    MainProcess    127 INFO     Loading plugin "pycodestyle.python_3000_not_equal" from entry-point.
flake8.plugins.manager    MainProcess    127 INFO     Loading plugin "pycodestyle.python_3000_raise_comma" from entry-point.
flake8.plugins.manager    MainProcess    127 INFO     Loading plugin "pycodestyle.tabs_obsolete" from entry-point.
flake8.plugins.manager    MainProcess    128 INFO     Loading plugin "pycodestyle.tabs_or_spaces" from entry-point.
flake8.plugins.manager    MainProcess    128 INFO     Loading plugin "pycodestyle.trailing_blank_lines" from entry-point.
flake8.plugins.manager    MainProcess    128 INFO     Loading plugin "pycodestyle.trailing_whitespace" from entry-point.
flake8.plugins.manager    MainProcess    128 INFO     Loading plugin "pycodestyle.whitespace_around_comma" from entry-point.
flake8.plugins.manager    MainProcess    129 INFO     Loading plugin "pycodestyle.whitespace_around_keywords" from entry-point.
flake8.plugins.manager    MainProcess    129 INFO     Loading plugin "pycodestyle.whitespace_around_named_parameter_equals" from entry-point.
flake8.plugins.manager    MainProcess    129 INFO     Loading plugin "pycodestyle.whitespace_around_operator" from entry-point.
flake8.plugins.manager    MainProcess    129 INFO     Loading plugin "pycodestyle.whitespace_before_comment" from entry-point.
flake8.plugins.manager    MainProcess    129 INFO     Loading plugin "pycodestyle.whitespace_before_parameters" from entry-point.
flake8.plugins.manager    MainProcess    130 INFO     Loading plugin "C90" from entry-point.
flake8.plugins.manager    MainProcess    131 INFO     Loading plugin "PD" from entry-point.
flake8.plugins.manager    MainProcess    147 INFO     Loading plugin "default" from entry-point.
flake8.plugins.manager    MainProcess    151 INFO     Loading plugin "pylint" from entry-point.
flake8.plugins.manager    MainProcess    151 INFO     Loading plugin "quiet-filename" from entry-point.
flake8.plugins.manager    MainProcess    151 INFO     Loading plugin "quiet-nothing" from entry-point.

`PD012` is outdated

First off, thank you for developing this plugin!

Is your feature request related to a problem? Please describe.

read_table is no longer deprecated in favour of read_csv following a discussion. As far as I can tell from reading the references in the given motivation, this means there is no longer a good reason to recommend read_csv over read_table per se.

Describe the solution you'd like

The rule should either be categorised as opinionated to require an opt-in (like PD901), removed, or changed to only emit a warning if read_table is called with sep="," (which is what ruff has just done).

Challenging PD008: .at can be useful

Is there a good reason to prefer .loc over .at when you need to get a single value?

We use .at over .loc in our codebase when we want to signal to other developers that we intend to get a single value, as opposed to a Series or DataFrame.

The warning seems to assume the developer picked .at for speed, while their intention is more likely to have picked it for correctness and clarity.
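For context, a sketch of the two access styles under discussion (example data is arbitrary):

import pandas as pd

df = pd.DataFrame({"price": [1.5, 2.0]}, index=["apple", "banana"])

# .at signals that exactly one scalar value is expected
single = df.at["apple", "price"]

# .loc accepts the same label pair but can also return a Series or DataFrame
# depending on the indexer, which is what PD008 currently steers users toward
same_value = df.loc["apple", "price"]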

Error: No such option "--annoy"

Describe the bug
Cannot use the --annoy flag with flake8

To Reproduce
Steps to reproduce the behavior:

  1. Create a tmp.py file
  2. Install flake8 and run flake8 on the file
  3. Install pandas-vet
  4. Run flake8 again, verifying that pandas-vet was loaded (expected: ``)
  5. Try again with the --annoy flag. This fails with the error: flake8: error: no such option: --annoy
  6. See terminal output below

tmp.py

import pandas

that = 1
this = that
print(this)

df = 1

Terminal:

(py368) C:\Users\king.kyle\>flake8 tmp.py
tmp.py:1:1: F401 'pandas' imported but unused
tmp.py:1:1: D100 Missing docstring in public module
tmp.py:5:1: T001 print found.

(py368) C:\Users\king.kyle\>python -m pip install pandas-vet
Collecting pandas-vet
  Using cached https://files.pythonhosted.org/packages/21/53/d031fd623fde85f554c73d87c431ad4cf5d929d89c1cd728ab5e4d145a52/pandas_vet-0.2.1-py3-none-any.whl
Installing collected packages: pandas-vet
Successfully installed pandas-vet-0.2.1

(py368) C:\Users\king.kyle\>flake8 tmp.py
tmp.py:1:1: F401 'pandas' imported but unused
tmp.py:1:1: D100 Missing docstring in public module
tmp.py:1:1: PD001 pandas should always be imported as 'import pandas as pd'
tmp.py:5:1: T001 print found.

(py368) C:\Users\king.kyle\>flake8 tmp.py --annoy
Usage: flake8 [options] file file ...

flake8: error: no such option: --annoy

(py368) C:\Users\king.kyle\>

Expected behavior
Expected the error PD901 'df' is a bad variable name. Be kinder to your future self.

Environment:

  • OS: Windows 10 64-bit / 1909
(py368) C:\Users\king.kyle\Developer\Packages\common_dev>python
Python 3.6.8 |Anaconda, Inc.| (default, Feb 21 2019, 18:30:04) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()

(py368) C:\Users\king.kyle\Developer\Packages\common_dev>flake8 --version
3.7.9 (flake8-blind-except: 0.1.1, flake8-bugbear: 19.8.0, flake8-docstrings: 1.5.0, pydocstyle: 4.0.1, flake8-mock: 0.3, flake8-pandas-vet: 0.2.1, flake8-print: 3.1.4, flake8-tuple: 0.4.1, flake8_builtins: 1.4.1, flake8_commas: 2.0.0, flake8_deprecated: 1.2, flake8_isort: 2.3, flake8_quotes: 2.1.1, logging-format: 0.6.0, mccabe: 0.6.1, naming: 0.8.2, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.6.8 on Windows

Improve contributor experience

Is your feature request related to a problem? Please describe.
I really like this project, and I want to contribute, but there are some roadblocks in the way.
Getting the project set up on my local machine had some issues:

  • Python 3.10 is not supported. That is my default Python, so it would be helpful if it were.
  • Outdated dependencies. pytest and black are both pinned to old versions that have issues. I suggest moving to poetry for dependency handling instead.
  • Manual style checks. I'd much rather have a predefined pre-commit check to run black, flake8, pytest, etc., instead of finding the exact commands in the readme before committing.

This is not meant to be one big complaint. This project is really useful to me, so thanks for building it.

Improve official docs

Need to add:

  • Basic usage guide
  • More on contributing.
  • Contributors list
  • License

Basically, everything that is currently in the README.md should be moved to the docs. The README.md should have minimal info and a basic example.

Method for off-by-default linter checks

As discussed in #64, it would be good to have some checks that can be implemented but are "off" by default. These would be the most opinionated checks that would be a bit too strict to be activated out-of-the-box.

"import re; re.sub(...)" Marked as PD005 Error

Description/Steps to Reproduce
For the file below, pandas-vet returns the error tmp.py:2:1: PD005 Use arithmetic operator instead of method. It looks like the rule is intended for pandas.DataFrame.sub but is being applied to re.sub.

import re
re.sub('', '', '')

Write AST explorer tool to facilitate creation of new 'check' functions.

Developing the check functions for each linter error requires exploration of the AST nodes to determine the corresponding attributes to be compared against the valid code patterns. This presents a barrier to quickly implementing new linter checks, one that could be reduced by a tool that returns the appropriate AST node attributes for a specified pattern.

The envisioned solution might utilize the following form:

attributes = ast_explore(code_signature)

where code_signature is a string representing the code pattern of interest, and attributes is a string (or list of strings) representing the composed attributes for the corresponding AST node. This string could then be appended to the AST node in the check functions:

if node.<attribute_string> == <test_condition>:

or

for attribute in node.<attribute_string>:
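A minimal starting-point sketch, assuming the proposed ast_explore name; it returns a full ast.dump of the parsed pattern rather than the composed attribute string envisioned above:

import ast

def ast_explore(code_signature: str) -> str:
    """Return a dump of the AST node for a code pattern of interest."""
    node = ast.parse(code_signature, mode="eval").body
    return ast.dump(node)

# Example: inspect the node produced by a call with an inplace keyword argument.
print(ast_explore("df.drop(columns='col_b', inplace=True)"))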

PD007, PD008, PD009 should not use node.value.attr

When flake8 runs check_for_ix, check_for_at, or check_for_iat, it raises AttributeError: 'Name' object has no attribute 'attr'

This seems to indicate that node.value returns a Name object and that the use of node.value.attr should be changed. Not sure if we can follow the pattern used for check_for_isnull, etc., because .ix, .at, and .iat are used as subscripts (.ix[], .at[]) rather than called as methods.
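An illustration of the likely cause, inferred from the traceback rather than from the pandas-vet source: for a plain name subscript, node.value is an ast.Name (which has no .attr), while an .ix/.at subscript yields an ast.Attribute:

import ast

plain = ast.parse("x[0]").body[0].value        # ast.Subscript
print(type(plain.value).__name__)              # Name -> accessing .attr raises AttributeError

indexed = ast.parse("df.ix[0]").body[0].value  # ast.Subscript
print(type(indexed.value).__name__)            # Attribute
print(indexed.value.attr)                      # 'ix'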

Add items to README

Need to add:

  • contributor names
  • links to external sites like flake8 docs
  • badges
  • note to contributors to run flake8 and pytest before submitting PR
  • contributor links

Arithmetic and comparison operators

We should implement checks for all of the text-based arithmetic and comparison operators. If found, recommend using the operator itself. Something like:

Use   Check for
+     .add
-     .sub and .subtract
*     .mul and .multiply
/     .div, .divide and .truediv
**    .pow
//    .floordiv
%     .mod
>     .gt
<     .lt
>=    .ge
<=    .le
==    .eq
!=    .ne
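A sketch of what such a check would flag and what it would recommend (example Series is arbitrary):

import pandas as pd

s = pd.Series([1, 2, 3])

# would be flagged: text-based methods
total_method = s.add(1)
mask_method = s.gt(2)

# recommended: the operators themselves
total = s + 1
mask = s > 2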

Initial Update

The bot created this issue to inform you that pyup.io has been set up on this repo.
Once you have closed it, the bot will open pull requests for updates as soon as they are available.

na vs. null

Check for .isnull and .notnull methods. Check them separately.

If .isnull found, give error message:

Use .isna instead of .isnull

If .notnull found, give error message:

Use .notna instead of .notnull
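A small illustration of the flagged and preferred spellings (example data is arbitrary; the pairs are aliases with identical behavior):

import pandas as pd

s = pd.Series([1.0, None, 3.0])

# flagged
mask_old = s.isnull()
kept_old = s.notnull()

# preferred
mask = s.isna()
kept = s.notna()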

Checks for .at and .iat

Check for .at and .iat indexing methods. These should probably be separate checks for consistency with other cases. See flashcard. Give error messages:

Use '.loc' instead of '.at'. If speed is important, use numpy.

Use '.iloc' instead of '.iat'. If speed is important, use numpy.
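A sketch of the flagged and allowed patterns; the .to_numpy() fallback is one possible fast path, not a prescription:

import pandas as pd

df = pd.DataFrame({"col_a": [1, 2, 3]})

value = df.iat[0, 0]           # would be flagged
value = df.iloc[0, 0]          # preferred
value = df.to_numpy()[0, 0]    # if speed really matters, drop down to numpy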

Collect groupby examples

For help with #24, we need to better understand the range of groupby uses. We should collect some real-world uses of groupby to better understand whether a linting pattern will raise too many false positives.

Check for pd.merge

Check for use of the pd.merge function. The .merge method is preferred. Even the pandas docs use .merge in the documentation of pd.merge! See flashcard.

Give error message:

Use '.merge' method instead of 'pd.merge' function. They have equivalent functionality.
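An illustration of the flagged function call and the preferred method (example frames are arbitrary):

import pandas as pd

left = pd.DataFrame({"key": [1, 2], "a": ["x", "y"]})
right = pd.DataFrame({"key": [1, 2], "b": ["u", "v"]})

merged_fn = pd.merge(left, right, on="key")   # would be flagged: module-level function
merged_method = left.merge(right, on="key")   # preferred: equivalent method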

Never set `inplace = True`

Check for use of parameter inplace = True. If found, give error message:

inplace operations do not always behave as you might expect

Use generator expressions when possible

Is your feature request related to a problem? Please describe.

pd.concat([df for df in my_dfs])

works, but could also be written as

pd.concat(df for df in my_dfs)

thanks to PEP 289.

Describe the solution you'd like

Flag cases when a generator expression could be used instead of a list comprehension

Describe alternatives you've considered
Giving a warning from within pandas. The problem is that, at runtime, one cannot tell whether a function was passed a list built with a list comprehension or a preformed list, so this is likely better suited to a linter.

Additional context

The trickiest part would probably be identifying all the cases where this can be done; that might mean parsing the codebase and looking for public functions where an argument is annotated with Iterable. I'm happy to raise a PR if you'd like this.

False positives with common method names

This is a dedicated issue for the big discussion in #74

The problem is that many of our checks assume the object in question is a pandas object. This is a fundamental issue with static linting in Python, because the AST doesn't know what type a thing is. This leads to false positives for things like re.sub() or dict.values().

I am open to suggestions on how to get around this, but it will likely be a big job. Some kind of integration with mypy or some other way to leverage type annotations might be a way to fix this, at least for folks who use those type annotations. What exactly that looks like is unclear to me, so please let me know if you have any ideas.

For now, the undesirable workaround is to turn off checks that are particularly bothersome.
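For example, a project hit by a particularly noisy check can silence it with flake8's standard ignore options (the file name below is a placeholder):

$ flake8 --extend-ignore=PD005 my_script.py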

pandas-vet should run when flake8 is invoked by pre-commit

Describe the bug
First of all, thanks; this repository looks awesome. I expect pandas-vet to run when flake8 is invoked by pre-commit, but it does not.

To Reproduce
Steps to reproduce the behavior:

  1. Install dependencies:

pip install pre-commit==2.1.1
pip install pandas-vet==0.2.2

  2. Create .pre-commit-config.yaml

repos:

  3. Install pre-commit in the specific repository

pre-commit install

  4. Write code that breaks a pandas-vet rule (e.g. naming a DataFrame df), try to commit, and the hook will pass without reporting it

Expected behavior
I would like pandas-vet to run the same way when flake8 is executed by the pre-commit hook.

Desktop (please complete the following information):

  • OS: Catalina

False positive: dict().values()

Solving this false positive should be easier than solving all of them at the same time (#81), because in pandas .values is a property, not a method.
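A sketch of the AST-level distinction a check could key on (illustrative only, not the current pandas-vet implementation):

import ast

attr_access = ast.parse("df.values").body[0].value    # ast.Attribute (pandas-style property access)
method_call = ast.parse("d.values()").body[0].value   # ast.Call (dict-style method call)
print(type(attr_access).__name__, type(method_call).__name__)  # Attribute Call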

PD005 erroneously flags arithmetic methods from other packages

Describe the bug
PD005 is triggered when any module has a method named something like .add()

To Reproduce
I noticed this while using the tarfile module. The tarfile module has a method named .add()

Here is the snippet. Flake8 triggered PD005 on both of the tar.add() calls.

tar = tarfile.open(tar_filename, "w")
tar.add(other_filename)  # <-- triggers PD005
for filename in filenames:
    tar.add(filename)  # <-- triggers PD005
tar.close()

Expected behavior
I expect this to only be triggered when a method like .add() is called on a pandas object.

Additional context
I expect that this applies to the comparison operators too (PD006). Any fix for PD005 should also be made for PD006.

Constant column check with `nunique`

Try to respond to as many of the following as possible

Generally describe the pandas behavior that the linter should check for and why that is a problem. Links to resources, recommendations, docs appreciated

The linter should check for nunique being compared to 1. The detected pattern is less performant because it does not leverage short-circuiting once multiple unique values are found; it simply continues counting.

perf_short_circuit

def setup(n):
    return pd.Series(list(range(n)))

perf_worst

def setup(n):
    return pd.Series([1] * (n - 1) + [2])

Suggest specific syntax or pattern(s) that should trigger the linter (e.g., .iat)

  • df.column.nunique() == 1
  • df.column.nunique() != 1
  • df.column.nunique(dropna=True) == 1
  • df.column.nunique(dropna=True) != 1
  • df.column.nunique(dropna=False) == 1
  • df.column.nunique(dropna=False) != 1

Suggest specific syntax or pattern(s) that the linter should allow (e.g., .iloc)

Note that the solution is simple when there are no NaN values:

(series.values[0] == series.values).all()

And needs some additional logic when NaN/NA values are present.

For dropna=True

v = series.values
v = remove_na_arraylike(v)
if v.shape[0] == 0:
    return False
(v[0] == v).all()

For dropna=False

v = series.values
if v.shape[0] == 0:
    return False
(v[0] == v).all() or not pd.notna(v).any()

if included in pandas:

series.is_constant()

Suggest a specific error message that the linter should display (e.g., "Use '.iloc' instead of '.iat'. If speed is important, use numpy indexing")

Consider checking equality to first element instead of .nunique() == 1 for checking for a constant column.

Are you willing to try to implement this check?

  • Yes
  • No
  • Maybe, with some guidance

Existing `flake8` errors in source and tests

There are existing flake8 lint warnings in the source and tests

(.venv) > flake8 {pandas_vet,tests}
pandas_vet/__init__.py:116:80: E501 line too long (83 > 79 characters)
pandas_vet/__init__.py:134:80: E501 line too long (94 > 79 characters)
pandas_vet/__init__.py:137:80: E501 line too long (83 > 79 characters)
pandas_vet/__init__.py:155:80: E501 line too long (105 > 79 characters)
pandas_vet/__init__.py:158:80: E501 line too long (83 > 79 characters)
pandas_vet/__init__.py:174:80: E501 line too long (107 > 79 characters)
pandas_vet/__init__.py:177:80: E501 line too long (83 > 79 characters)
pandas_vet/__init__.py:232:80: E501 line too long (85 > 79 characters)
pandas_vet/__init__.py:365:80: E501 line too long (84 > 79 characters)
pandas_vet/__init__.py:369:80: E501 line too long (82 > 79 characters)
pandas_vet/__init__.py:373:80: E501 line too long (84 > 79 characters)
pandas_vet/__init__.py:383:80: E501 line too long (83 > 79 characters)
pandas_vet/__init__.py:386:80: E501 line too long (85 > 79 characters)
pandas_vet/__init__.py:389:80: E501 line too long (102 > 79 characters)
pandas_vet/__init__.py:392:80: E501 line too long (93 > 79 characters)
pandas_vet/__init__.py:395:80: E501 line too long (90 > 79 characters)
pandas_vet/__init__.py:398:80: E501 line too long (81 > 79 characters)
pandas_vet/__init__.py:401:80: E501 line too long (107 > 79 characters)
tests/test_PD015.py:56:80: E501 line too long (86 > 79 characters)

The simple fix here is to just update this code.

But it may also be a good idea to add a flake8 lint check in CI, and to consider the library's interest in adopting black auto-formatting and how that might influence a flake8 config file.
