
jdiff's People

Contributors

chadell, dependabot[bot], dwight525, itdependsnetworks, kircheneer, lvrfrc87, progala, pszulczewski, scetron, ubajze


jdiff's Issues

Add `all_same` argument for operator check.

Based on the discussion in #85, it would be great to give more granular control to the operator check via an argument such as all_items, where we want to check whether all the items in the data match the operator logic.
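A minimal sketch of what such an argument could look like. The `apply_operator` helper and its signature are hypothetical illustrations, not jdiff's actual API:

```python
def apply_operator(data, operator, all_items=False):
    """Apply an operator check to each item in data.

    With all_items=True the check passes only if every item matches;
    otherwise a single matching item is enough.
    """
    results = [operator(item) for item in data]
    return all(results) if all_items else any(results)


# Example: check that interface MTUs are greater than 1500.
mtus = [9000, 9000, 1500]
print(apply_operator(mtus, lambda x: x > 1500))                  # passes if any item matches
print(apply_operator(mtus, lambda x: x > 1500, all_items=True))  # requires every item to match
```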

Parameter Match does not return correct result

>>> post_data = {
...       "jsonrpc": "2.0",
...       "id": "EapiExplorer-1",
...       "result": [
...         {
...           "interfaces": {
...             "Management1": {
...               "lastStatusChangeTimestamp": 1626247821.123456,
...               "lanes": 0,
...               "name": "Management1",
...               "interfaceStatus": "down",
...               "autoNegotiate": "success",
...               "burnedInAddress": "08:00:27:e6:b2:f8",
...               "loopbackMode": "loopbackNone",
...               "interfaceStatistics": {
...                 "inBitsRate": 3403.4362520883615,
...                 "inPktsRate": 3.7424095978179257,
...                 "outBitsRate": 16249.69114419833,
...                 "updateInterval": 300,
...                 "outPktsRate": 2.1111866059750692
...               }
...             }
...           }
...         }
...       ]
...     }
>>> 
>>> my_check = CheckType.init(check_type="parameter_match")
>>> my_jmspath = "result[*].interfaces.*.[$name$,interfaceStatus,autoNegotiate]"
>>> post_value = my_check.get_value(output=post_data, path=my_jmspath)
>>> my_parameter_match = {"mode": "match", "params": {"interfaceStatus": "connected", "autoNegotiate": "success"}}
>>> actual_results = my_check.evaluate(post_value, **my_parameter_match)
>>> actual_results
({}, True)
>>> post_value
[{'Management1': {'interfaceStatus': 'down', 'autoNegotiate': 'success'}}]
>>> my_parameter_match = {"mode": "no-match", "params": {"interfaceStatus": "connected", "autoNegotiate": "success"}}
>>> actual_results = my_check.evaluate(post_value, **my_parameter_match)
>>> actual_results
({}, True)
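For illustration, here is a pure-Python sketch of the semantics the two modes are expected to have. This re-implements the intended logic from scratch and is not jdiff's internal code:

```python
def parameter_match(value, params, mode):
    """Sketch of the expected (diff, passed) result of a parameter_match check.

    mode="match": every param must match; mismatching values are reported.
    mode="no-match": every param must differ; matching values are reported.
    """
    diff = {}
    for item in value:
        for ref_key, attrs in item.items():
            if mode == "match":
                bad = {k: attrs.get(k) for k, v in params.items() if attrs.get(k) != v}
            else:  # "no-match"
                bad = {k: attrs.get(k) for k, v in params.items() if attrs.get(k) == v}
            if bad:
                diff[ref_key] = bad
    return diff, not diff


value = [{"Management1": {"interfaceStatus": "down", "autoNegotiate": "success"}}]
params = {"interfaceStatus": "connected", "autoNegotiate": "success"}
print(parameter_match(value, params, "match"))
# ({'Management1': {'interfaceStatus': 'down'}}, False) -- not ({}, True) as reported
```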

Single value not supported in expression

If a user wants to extract just one element from the JSON object, it still has to be passed as a list:

"global.$peers$.*.is_enabled" won't work
returns: ({'index_element[3]': {'new_value': False, 'old_value': True}}, False)

"global.$peers$.*.[is_enabled]" will work
returns: ({'7.7.7.7': {'is_enabled': {'new_value': False, 'old_value': True}}}, False)

We can either fix this or document it as a requirement.

Review test_type_check.py

@chadell As part of the code refactoring, I am going through all the tests included in the library. I believe there is partial overlap between test_type_check.py and test_diff_generator.py. Whenever you have a chance, could you please have a look?

Housekeeping test mock files

I believe we can work with a single file for mock data. It would be great to clean up all the redundant files and adopt a naming convention.

Create test for Check Type Assertion

As we have some assertions within the CheckType class, we need to add tests to verify that those assertions raise errors as expected. For example, the regex implementation would need to test all of the below:

        try:
            parameter = value_to_compare[1]
        except IndexError as error:
            raise IndexError(
                f"Evaluating parameter must be defined as dict at index 1. You have: {value_to_compare}"
            ) from error

        # Assert that the check parameter at index 1 is a dict.
        if not isinstance(parameter, dict):
            raise TypeError("check_option must be of type dict().")

        # Assert that the check option has both 'regex' and 'mode' keys.
        if "regex" not in parameter or "mode" not in parameter:
            raise KeyError(
                "Regex check-type requires check-option. Example: dict(regex='.*UNDERLAY.*', mode='no-match')."
            )

        # Assert that 'mode' is one of the supported values.
        if parameter["mode"] not in ["match", "no-match"]:
            raise ValueError(
                "mode must be 'match' or 'no-match'. Example: dict(regex='.*UNDERLAY.*', mode='no-match')."
            )
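One possible shape for those tests, using a small stdlib-only `raises` helper (in the repo these would more idiomatically be `pytest.raises` tests, and `validate_regex` below is a simplified stand-in for the real method):

```python
def validate_regex(value_to_compare):
    """Simplified stand-in for the regex check's validation logic."""
    try:
        parameter = value_to_compare[1]
    except IndexError as error:
        raise IndexError(
            f"Evaluating parameter must be defined as dict at index 1. You have: {value_to_compare}"
        ) from error
    if not isinstance(parameter, dict):
        raise TypeError("check_option must be of type dict().")
    if "regex" not in parameter or "mode" not in parameter:
        raise KeyError("Regex check-type requires 'regex' and 'mode' keys.")
    if parameter["mode"] not in ["match", "no-match"]:
        raise ValueError("mode must be 'match' or 'no-match'.")


def raises(exc, func, *args):
    """Return True if func(*args) raises exc, else False."""
    try:
        func(*args)
    except exc:
        return True
    return False


# One test per assertion branch.
assert raises(IndexError, validate_regex, ["only-one-element"])
assert raises(TypeError, validate_regex, ["value", "not-a-dict"])
assert raises(KeyError, validate_regex, ["value", {"regex": ".*UNDERLAY.*"}])
assert raises(ValueError, validate_regex, ["value", {"regex": ".*", "mode": "bogus"}])
```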

Parsed value includes reference key value

Given the following JMESPath expression, result[0].vrfs.default.peerList[*].[$peerAddress$,localAsn,linkType], the parsed output should include only localAsn and linkType.
Instead, we also get the reference key:

[{'7.7.7.7': {'peerAddress': '7.7.7.7', 'localAsn': '65130.1010', 'linkType': 'the road to seniority'}}, {'10.1.0.0': {'peerAddress': '10.1.0.0', 'localAsn': '65130.8888', 'linkType': 'external'}}, {'10.2.0.0': {'peerAddress': '10.2.0.0', 'localAsn': '65130.1100', 'linkType': 'external'}}, {'10.64.207.255': {'peerAddress': '10.64.207.255', 'localAsn': '65130.1100', 'linkType': 'external'}}]
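Until this is fixed, a possible post-processing workaround is to drop the anchored field from each inner dict. This is a sketch, not jdiff code; `ANCHOR` is the $-anchored field name from the expression:

```python
# Parsed output as reported, with the reference key (peerAddress) leaking in.
parsed = [
    {"7.7.7.7": {"peerAddress": "7.7.7.7", "localAsn": "65130.1010", "linkType": "the road to seniority"}},
    {"10.1.0.0": {"peerAddress": "10.1.0.0", "localAsn": "65130.8888", "linkType": "external"}},
]

ANCHOR = "peerAddress"  # the $-anchored field from the JMESPath expression

# Rebuild each item without the anchored field.
cleaned = [
    {ref: {k: v for k, v in attrs.items() if k != ANCHOR} for ref, attrs in item.items()}
    for item in parsed
]
print(cleaned)
```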

Code review feedback

  • Is returning a tuple (<dict whose structure depends on the check type>, <boolean>) the ideal API for this library? Or would it make sense to define some custom classes/data structures to return, in order to make things more user-friendly and more flexible for future expansion?
  • Consider removing Python 3.6 support as it's past EOL.
  • Is there a reason that CI needs to run pylint and pytest in containers, but the other linting can run with INVOKE_LOCAL=True?
  • In pyproject.toml, please disable pylint rules by name rather than by message number, and consider adding comments to explain why rules are disabled - it's not at all obvious for example what rule E5110 might be or why it's disabled.
  • Looks like tests/unit/ and tests/integration directories are just cookiecutter boilerplate - consider deleting them.
  • It's not obvious to me why flatten_list flattens [[[[-1, 0], [-1, 0]]]] to [[-1, 0], [-1, 0]] rather than say [-1, 0, -1, 0] - consider adding more details to its docstring to make it clearer what it considers to be a "flattened" list and what it doesn't.
  • Looks like exclude_filter as implemented accepts either a dict or a list for the data parameter, but the type annotations say it takes a Mapping - perhaps this should be Union[Dict, List] instead?
  • Personally I find the heavily parameterized tests a bit hard to read and understand (for example, test_diff_generator.py - lots of inputs and expected outputs, very little explanation of why these are the expected outputs for any given set of inputs). Not asking for a refactor/redesign here necessarily, just something to consider. Maybe just adding a few comments to each test case might be sufficient.
  • For test_diff_generator and test_operators especially, is there a reason that the pre/post data are stored as separate JSON files, while the other test data are defined inline in the test scripts? Might be more consistent and more maintainable to either store it all as JSON or all as code.
  • test_sw_upgrade_device_state - would renaming test_is_passed to check_should_pass be more accurate?
  • Is there testing for any of the functions in netcompare.utils.diff_helpers? I see TODOs for a couple of them but I may have overlooked testing for the ones that don't have TODOs either.
  • Consider adding mypy to the linters in order to validate your type hinting is correct. Not blocking.
  • The functions in netcompare.utils.jmespath_parsers would benefit from more comprehensive docstrings, IMO.
  • Is there a reason that CheckType.get_value() only supports/uses exclude when output is a Dict? Given that exclude_filter explicitly can handle both dicts and lists? If this is by design, consider raising an exception in the "else" case rather than silently ignoring the exclude parameter.
  • Looks like ToleranceType will reject tolerance=0 as invalid - intentional?
  • Is there a reason tolerance must be an int, rather than say a float?
  • Should ToleranceType check that tolerance is between 0 and 100? Or are there valid use cases for a >100% tolerance?
  • For required kwargs, consider explicitly declaring them as such in the validate signature, e.g., for ToleranceType this would be def validate(*, tolerance: int). Also, is there a reason for the concrete implementations of validate to accept wildcard **kwargs at all?
  • In class Operator, referance_data should be reference_data throughout.
  • In class Operator, more detailed docstrings would be appreciated.

Reference key get overwritten by hardcoded default key.

@pszulczewski https://github.com/networktocode-llc/netcompare/pull/34 introduced a bug:

>>> from netcompare import CheckType
>>> pre_data = {
      "jsonrpc": "2.0",
      "id": "EapiExplorer-1",
      "result": [
        {
          "interfaces": {
            "Management1": {
              "lastStatusChangeTimestamp": 1626247820.0720868,
              "lanes": 0,
              "name": "Management1",
              "interfaceStatus": "connected",
              "autoNegotiate": "success",
              "burnedInAddress": "08:00:27:e6:b2:f8",
              "loopbackMode": "loopbackNone",
              "interfaceStatistics": {
                "inBitsRate": 3582.5323982177174,
                "inPktsRate": 3.972702352461616,
                "outBitsRate": 17327.65267220522,
                "updateInterval": 300,
                "outPktsRate": 2.216220664406746
              }
            }
          }
        }
      ]
    }
>>> post_data = {
      "jsonrpc": "2.0",
      "id": "EapiExplorer-1",
      "result": [
        {
          "interfaces": {
            "Management1": {
              "lastStatusChangeTimestamp": 1626247821.123456,
              "lanes": 0,
              "name": "Management1",
              "interfaceStatus": "down",
              "autoNegotiate": "success",
              "burnedInAddress": "08:00:27:e6:b2:f8",
              "loopbackMode": "loopbackNone",
              "interfaceStatistics": {
                "inBitsRate": 3403.4362520883615,
                "inPktsRate": 3.7424095978179257,
                "outBitsRate": 16249.69114419833,
                "updateInterval": 300,
                "outPktsRate": 2.1111866059750692
              }
            }
          }
        }
      ]
    }
>>> my_check = CheckType.init(check_type="exact_match")
>>> my_jmspath = "result[*].interfaces.$Management1$.interfaceStatus"
>>> post_value = my_check.get_value(output=post_data, path=my_jmspath)
>>> pre_value = my_check.get_value(output=pre_data, path=my_jmspath)
>>> pre_value
['connected']
>>> post_value
['down']
>>> result = my_check.evaluate(value_to_compare=post_value, reference_data=pre_value)
>>> result
({'index_element[0]': {'new_value': 'down', 'old_value': 'connected'}}, False)

In that case, we use $Management1$ as the reference key, so I would expect the result to be:

({'Management1': {'new_value': 'down', 'old_value': 'connected'}}, False)

Can we have a look together, unless you already know how to fix it?

Inconsistent results when comparing lists of dictionaries.

The fix_deepdiff_key_names() function causes deepdiff results to be overwritten by the latest one.

The following bug was identified when comparing data structured with textfsm templates (the data structure from textfsm is usually a list of dictionaries).
When fix_deepdiff_key_names() aggregates keys from the deepdiff output, it doesn't distinguish between different list elements and overwrites them, returning only the last identified difference.

Compare the mock files in tests/mock/textfsm_ospf_int_br to understand the differences.
Check out the issue branch textfsm_overwritten and run the test: pytest -k textfsm_ospf_int_br

diff_generator returns:

({'state': {'new_value': 'DR', 'old_value': 'BDR'}}, False)

when DeepDiff correctly identifies differences as:

{'values_changed': {"root[1]['state']": {'new_value': 'DOWN', 'old_value': 'P2P'}, "root[2]['state']": {'new_value': 'DR', 'old_value': 'BDR'}}}
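A sketch of one way the aggregation could keep both differences: preserve the list index from the DeepDiff path when building the flattened key, so distinct elements cannot collide. This is an illustrative re-implementation, not the actual fix_deepdiff_key_names() code:

```python
import re

# DeepDiff output as reported for the textfsm (list-of-dicts) comparison.
deepdiff_result = {
    "values_changed": {
        "root[1]['state']": {"new_value": "DOWN", "old_value": "P2P"},
        "root[2]['state']": {"new_value": "DR", "old_value": "BDR"},
    }
}

# Keep the list index in the aggregated key so one element cannot
# overwrite another (the reported bug collapses both onto 'state').
fixed = {}
for path, change in deepdiff_result["values_changed"].items():
    keys = re.findall(r"\['?([^]']+)'?\]", path)  # e.g. ['1', 'state']
    fixed[".".join(keys)] = change
print(fixed)
# {'1.state': {'new_value': 'DOWN', 'old_value': 'P2P'}, '2.state': {'new_value': 'DR', 'old_value': 'BDR'}}
```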

`get_value` method raises exception for JMESPath expression with no results

Environment

  • Python version: 3.8.7
  • netcompare version: 0.0.1b1

Expected Behavior

When providing a JMESPath expression to the get_value method that doesn't produce a result, I expect to receive an empty dict or None.

Observed Behavior

An exception is raised, see traceback below:

(.venv) przemek@quark:~/osrb/tests/netcompare$ /home/przemek/osrb/tests/netcompare/.venv/bin/python /home/przemek/osrb/tests/netcompare/bug.py
Traceback (most recent call last):
  File "/home/przemek/osrb/tests/netcompare/bug.py", line 7, in <module>
    pre_value = my_check.get_value(output=data, path=my_jmspath)
  File "/home/przemek/osrb/tests/netcompare/.venv/lib/python3.8/site-packages/netcompare/check_types.py", line 77, in get_value
    if not any(isinstance(i, list) for i in values):
TypeError: 'NoneType' object is not iterable

Steps to Reproduce

Use the below code to reproduce this scenario:

from netcompare import CheckType

data = {"global": {"peers": {"10.1.0.0": "peer1", "10.2.0.0": "peer2"}}}

my_jmspath = "global[*]"
my_check = CheckType.init(check_type="exact_match")
pre_value = my_check.get_value(output=data, path=my_jmspath)
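A defensive-fix sketch: guard against the None that the underlying jmespath lookup returns on no match, before the `any(isinstance(i, list) for i in values)` iteration that currently raises. The `search` parameter below is a stand-in for that lookup (its default stub simulates the no-match case); the real method's internals may differ:

```python
def get_value(output, path, search=lambda path, output: None):
    """None-safe sketch of get_value.

    `search` stands in for the underlying jmespath lookup, which returns
    None when the expression matches nothing; the default stub simulates
    exactly that no-match case.
    """
    values = search(path, output)
    if values is None:
        # Guard before iterating over `values`, which currently raises
        # "TypeError: 'NoneType' object is not iterable".
        return {}
    return values


data = {"global": {"peers": {"10.1.0.0": "peer1", "10.2.0.0": "peer2"}}}
print(get_value(data, "global[*]"))  # {} instead of a TypeError
```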

jdiff should support multiple ref key anchoring

As per the JSON below, we might have multiple VRFs, and for each VRF multiple peer IPs, which are unique within each VRF. In the example below, we want to anchor both global and 192.168.0.0, so the expression would look like: $*$.peers.$*$.*.*.[accepted_prefixes,received_prefixes,sent_prefixes]

{
  "global": {
    "router_id": "10.255.255.1",
    "peers": {
      "192.168.0.0": {
        "description": "",
        "remote_id": "10.255.255.240",
        "address_family": {
          "ipv6": {
            "received_prefixes": -1,
            "sent_prefixes": -1,
            "accepted_prefixes": -1
          },
          "ipv4": {
            "received_prefixes": -1,
            "sent_prefixes": -1,
            "accepted_prefixes": -1
          }
        },
        "remote_as": 30000,
        "is_up": true,
        "local_as": 30001,
        "is_enabled": true,
        "uptime": 699262

Make `jmspath` arg mandatory

Based on this discussion, I believe it would be great to make the jmspath arg mandatory. That would keep the code consistent and make the functionality of the arg less confusing for users. If a raw diff is needed, * can be used.

jdiff not able to catch reference key in dict of dicts

Environment

  • Python version: 3.7
  • jdiff version: 0.0.2

Expected Behavior

As it stands now, jdiff can catch a reference key only in a list of dictionaries. There are cases where NAPALM returns a dict of dicts. In the example below, we want .local. and .local..0 to be the ref keys of is_up:

{
  ".local.": {
    "description": "",
    "is_enabled": true,
    "is_up": true,
    "last_flapped": -1,
    "mac_address": "Unspecified",
    "mtu": 0,
    "speed": -1
  },
  ".local..0": {
    "description": "",
    "is_enabled": true,
    "is_up": true,
    "last_flapped": -1,
    "mac_address": "Unspecified",
    "mtu": 0,
    "speed": -1
  }
}

@pszulczewski FYI
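Until native support exists, a possible workaround is to reshape the dict of dicts into the list-of-dicts form that the reference-key logic already understands. This is a sketch over a trimmed version of the data above:

```python
# Trimmed dict-of-dicts as returned by NAPALM.
data = {
    ".local.": {"description": "", "is_enabled": True, "is_up": True, "mtu": 0},
    ".local..0": {"description": "", "is_enabled": True, "is_up": True, "mtu": 0},
}

# Reshape into a list of single-key dicts, so each interface name
# (.local., .local..0) can serve as a reference key.
as_list = [{name: attrs} for name, attrs in data.items()]
print(as_list)
```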

Fix `mypy` errors

As suggested by the OSRB code review, we have implemented mypy in the tests. We need to fix all the errors it returns.
