
network_traffic_modeler_py3's Introduction


pyNTM: network_traffic_modeler_py3

  • How will a failure on your wide area network (WAN) affect link utilizations?
  • What about a layer 3 node failure?
  • Will all of your RSVP auto-bandwidth LSPs be able to resignal after a link/node failure?
  • Can your WAN handle a 10% increase in traffic during steady state? What about in a failover state?

These questions are non-trivial to answer for a medium-to-large WAN with a meshed, interconnected topology, and these are exactly the scenarios a WAN simulation engine is designed to answer.

This is a network traffic modeler written in Python 3. The main use cases involve understanding how your layer 3 traffic will transit a given topology. You can modify the topology (add/remove layer 3 Nodes, Circuits, Shared Risk Link Groups), fail elements in the topology, or add new traffic Demands to the topology. pyNTM is a simulation engine that converges the modeled topology to give you insight into how traffic will transit a given topology, either in steady state or with failures.

This library allows users to define a layer 3 network topology, define a traffic matrix, and then run a simulation to determine how the traffic will traverse the topology, traverse a modified topology, and fail over. If you've used Cariden MATE or WANDL, this code solves for some of the same basic use cases those do. This package is in no way related to those, or any, commercial products. IGP and RSVP routing is supported.

pyNTM can be used as an open source solution to answer WAN planning questions; you can also run pyNTM alongside a commercial solution as a validation/check on the commercial solution.
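
As a rough, illustrative sketch of that workflow (the model file name and interface names below are made up, and the fail_interface()/get_interface_object() calls and the utilization attribute should be verified against the documentation):

from pyNTM import PerformanceModel

# Load a topology and traffic matrix from a model file, then converge it
model = PerformanceModel.load_model_file('sample_network_model_file.csv')
model.update_simulation()

# Inspect steady-state utilization on an interface (names are illustrative)
interface = model.get_interface_object('A-to-B', 'A')
print(interface.utilization)

# Fail a layer 3 element, re-converge, and inspect the failover state
model.fail_interface('A-to-B', 'A')
model.update_simulation()
print(model.get_interface_object('A-to-C', 'A').utilization)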

Training

See the training modules at https://github.com/tim-fiola/TRAINING---network_traffic_modeler_py3-pyNTM-

Documentation

See the documentation on Read the Docs.

Examples

See the example directory.

Install

Install via pip:

pip3 install pyNTM

To upgrade:

pip3 install --upgrade pyNTM

pyNTM Model Classes

In pyNTM, the Model objects house the network topology objects: traffic Demands, layer 3 Nodes, Circuits, Shared Risk Link Groups (SRLGs), Interfaces, etc. The Model classes control how all the contained objects interact with each other during Model convergence to produce simulation results.

There are two subclasses of Model objects: the PerformanceModel object and the newer FlexModel object (introduced in version 1.6).
Starting in version 1.7, what used to be called the Model class is now the PerformanceModel class, and the former Parallel_Link_Model class is now known as the FlexModel class.
There are two main differences between the two types of objects:

  • The PerformanceModel object only allows a single Circuit between two layer 3 Nodes, while the FlexModel allows multiple Circuits between the same two Nodes.
  • The PerformanceModel will have better performance (measured in time to converge) than the FlexModel. This is because the FlexModel performs additional checks to account for potential multiple Circuits between Nodes and other topology features such as IGP shortcuts.

The legacy Model and Parallel_Link_Model should still work as they have been made subclasses of the PerformanceModel and FlexModel classes, respectively.

The PerformanceModel class is a good fit when the topology meets the following criteria:

  • There is only one link (Circuit) between each layer 3 Node
  • IGP-only routing and/or RSVP LSP routing with no IGP shortcuts (traffic source and destination matches LSP source and destination)

Which Model Class To Use

All model classes support:

  • IGP routing
  • RSVP LSPs carrying traffic demands whose source and destination match those of the LSP
  • RSVP auto-bandwidth or fixed bandwidth
  • RSVP LSP manual metrics

The PerformanceModel class allows for:

  • Single Circuits between 2 Nodes
  • Error messages if it detects use of IGP shortcuts or multiple Circuits between 2 Nodes

The FlexModel class allows for:

  • Multiple Circuits between 2 Nodes
  • RSVP LSP IGP shortcuts, whereby LSPs can carry traffic demands downstream even if the demand's source and destination do not match those of the LSP

In some cases, it's completely valid to model multiple Circuits between Nodes as a single Circuit. For example, when there are multiple Circuits between Nodes but each Interface has the same metric and the use case is to model capacity between Nodes, it's often valid to combine the Circuit capacities and model them as a single Circuit. In that case, the PerformanceModel object is recommended, as it will give better performance.

If it is important to keep each Circuit modeled separately because the parallel Interfaces have different metrics and/or differences in their capabilities to route RSVP, the FlexModel is the better choice.

If there is any doubt as to which class to use, use the FlexModel class.
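
For example, the parallel-link test topology referenced elsewhere in this page loads cleanly into a FlexModel, while the PerformanceModel is expected to flag it. The sketch below assumes that flagging surfaces as an exception; the exact exception type is an assumption:

from pyNTM import FlexModel, PerformanceModel

# Multiple Circuits between the same two Nodes: supported by FlexModel
flex_model = FlexModel.load_model_file('model_test_topology_multidigraph.csv')
flex_model.update_simulation()

# The PerformanceModel flags multiple Circuits between two Nodes
try:
    perf_model = PerformanceModel.load_model_file('model_test_topology_multidigraph.csv')
    perf_model.update_simulation()
except Exception as err:
    print("PerformanceModel rejected the topology: {}".format(err))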

Optimization

There are two main areas where we are looking to optimize:

  • Performance - converging the model to produce a simulation, especially in a model with RSVP LSPs, is computationally intensive. Reducing the time it takes to converge the simulation means better productivity and an improved user experience
    • pyNTM supports the pypy3 interpreter, which gives 60-90% better performance than the python3 interpreter
    • In this case, performance refers to how long it takes to converge a model to produce a simulation; for larger models, pypy3 provides much better performance (a small timing sketch follows this list)
  • Data retrieval - the simulation produces an extraordinary amount of data. Currently, the model only retains a fraction of the data generated during model convergence. The goal is to introduce something like an SQLite database in the model objects to hold all of this information, which will improve the user experience and allow SQL queries against the model object.
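
As a minimal sketch of what "performance" means here (the model file name is illustrative), timing the convergence step looks like this:

import time

from pyNTM import FlexModel

model = FlexModel.load_model_file('sample_network_model_file.csv')

start = time.time()
model.update_simulation()  # converge the model to produce the simulation
print("convergence took {:.2f} seconds".format(time.time() - start))

Running the same script under pypy3 instead of python3 is where the 60-90% improvement tends to show up, particularly on larger models.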

Visualization

Info about the new WeatherMap class that provides visualization is available in the wiki: https://github.com/tim-fiola/network_traffic_modeler_py3/wiki/Visualizing-the-network-with-the-WeatherMap-Class

License

Copyright 2019 Tim Fiola

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0

network_traffic_modeler_py3's People

Contributors

fooelisa, tim-fiola


network_traffic_modeler_py3's Issues

Support for MultiDigraph in Model

Hello,

Great work!

It would be great to add support for MultiDiGraph in the model. This is required to simulate parallel links between nodes.

Regards

Improve performance by storing LSP traffic on the LSP object itself when converging the model

For large models (# LSPs is large, say 11000), this call below takes a long time to run:

lsp.traffic_on_lsp(model)

I think we can get around this by storing info when we converge the model. I need to look at the source code more, but perhaps we could add a _traffic_on_lsp list attribute to the RSVP_LSP when converging the model, to store the demands and proportions on the RSVP_LSP so the traffic does not have to be recomputed again.

I'm not sure this is possible, but look into it.
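
One possible shape for that optimization, sketched here as a standalone example rather than actual pyNTM code (the class name, method names, and the demand .traffic attribute are assumptions):

class CachedLSP:
    """Hypothetical LSP-like object that caches its carried demands at convergence time."""

    def __init__(self, lsp_name):
        self.lsp_name = lsp_name
        self._traffic_on_lsp = None  # populated during model convergence

    def cache_traffic(self, demands_and_fractions):
        # demands_and_fractions: list of (demand, fraction_of_demand_on_this_lsp)
        # tuples computed once while the model converges
        self._traffic_on_lsp = demands_and_fractions

    def traffic_on_lsp(self):
        # Cheap sum over the cached data instead of a full recomputation
        if self._traffic_on_lsp is None:
            raise ValueError("model has not been converged; no cached traffic")
        return sum(demand.traffic * fraction
                   for demand, fraction in self._traffic_on_lsp)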

Add cost to LSP Object

Hello,

Feature request to add cost to the LSP object. This will help with LSPs between two hosts that have different metrics (corner case, I know).
It will also help with equal-cost LSPs later in the simulation if there is a need to load-balance across LSPs.

Regards,

Cannot get the visualizations

>>> from graph_network import graph_network_interactive
>>> graph_network_interactive.make_interactive_network_graph(model1)
/Users/xxxx/.local/lib/python3.7/site-packages/mpld3/plugins.py:675: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3,and in 3.9 it will stop working
  if isinstance(entry, collections.Iterable):
>>> Encountered exception.
main thread is not in main loop

This may be due to an mpld3 bug described in the link below:
https://github.com/mpld3/mpld3/issues/434

To overcome this bug, run the following command from the CLI to
get the mpld3 patch from github:
python3 -m pip install --user "git+https://github.com/javadba/mpld3@display_fix"

Hi, I ran the demo and tried to get the visualizations but got this error. It seems to be a problem with mpld3 where something is deprecated. I ran the suggested line python3 -m pip install --user "git+https://github.com/javadba/mpld3@display_fix" but that did not fix it.

Have you encountered this problem, and how did you solve it? Thanks a lot :)

total capacity between nodes

Hi,

I am working with a network where some edges are uni-directional and some are bi-directional with a shared (sum) capacity. For example, if node A and node B are connected by such an edge, flow is possible both from A to B and from B to A over it; the edge does not have a per-direction capacity but a shared capacity, meaning that the sum of the flow from A to B and the flow from B to A cannot exceed the capacity. Is there a way to model this in pyNTM?

Thanks a lot!

Feature request: automatic creation of required LSPs based on provided traffic demands

I have been exploring pyNTM lately and I love it. It provides great insights into network routing and, on top of that, it has helped me improve my Python skills, since I am a Java backend engineer by trade.
I see that auto-configuration of the LSPs' reserved bandwidth is supported (currently the LSP definitions need to be provided by the user), so I wonder whether the automatic creation of LSPs based on traffic demands (and the network topology, of course, and maybe also some user-defined constraints) is on the pyNTM roadmap (this feature is sometimes called automesh).

As an example, a user would submit the following query to PyNTM:

  • given a network topology and a set of demands, please calculate the required LSPs so that the standard deviation of the network link utilizations is as small as possible.

Install issues caused by NumPy evolution

Installing with a recent NumPy causes the following error message: AttributeError: module 'numpy' has no attribute 'int'.
np.int was a deprecated alias for the builtin int. To avoid this error in existing code, use int by itself. Doing this will not modify any behavior and is safe. When replacing np.int, you may wish to use e.g. np.int64 or np.int32 to specify the precision. If you wish to review your current use, check the release notes for additional information.
The alias was originally deprecated in NumPy 1.20; for more details and guidance, see the original NumPy 1.20 release notes.

The code probably needs updating.
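
As an illustration (not taken from the pyNTM source), a call site that still uses the removed alias can be updated like this:

import numpy as np

# Old form - raises AttributeError on NumPy >= 1.24, where np.int was removed:
#     cols = np.zeros(5, dtype=np.int)

# Updated forms:
cols = np.zeros(5, dtype=int)         # builtin int, same behavior as the old np.int
cols64 = np.zeros(5, dtype=np.int64)  # or pick an explicit precision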

There is a problem with the construction of the WeatherMap class with __main__, causing some unexpected behavior

2 weird issues:

  1. The create_weathermap() call must be used within a script. If called at the live python3 CLI, the user will most likely receive an error message about not being able to find the __main__ module
  2. Calling create_weathermap() in a script causes the entire script to run twice

Example of item 1:

>>> from pyNTM import FlexModel
>>> from pyNTM import WeatherMap
>>> 
>>> model = FlexModel.load_model_file('model_test_topology_multidigraph.csv')
>>> 
>>> print("updating simulation manually")
updating simulation manually
>>> model.update_simulation()
Routing the LSPs . . . 
Routing 1 LSPs in parallel LSP group F-E; 1/3
Routing 1 LSPs in parallel LSP group A-F; 2/3
Routing 2 LSPs in parallel LSP group A-D; 3/3
LSPs routed (if present) in 0:00:00.001311; routing demands now . . .
Demands routed in 0:00:00.009475; validating model . . . 
>>> 
>>> vis = WeatherMap(model)
>>> 
>>> vis.create_weathermap()

*** NOTE: The make_visualization_beta function is a beta feature.  It may not have been as 
extensively tested as the pyNTM code in general.  The API calls for this may also 
change more rapidly than the general pyNTM code base.



Visualization is available at http://127.0.0.1:8050/


Dash is running on http://127.0.0.1:8050/

 * Serving Flask app "pyNTM.weathermap" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
/Users/tfi/Documents/python_crap/test_3.0/venv/bin/python3: can't find '__main__' module in '/Users/tfi/Documents/python_crap/test_3.0'

Example of item 2:

(venv) Timothys-Mini:test_3.0 timothyfiola$ python3 -i simple_test.py 
updating simulation manually
Routing the LSPs . . . 
Routing 1 LSPs in parallel LSP group F-E; 1/3
Routing 1 LSPs in parallel LSP group A-F; 2/3
Routing 2 LSPs in parallel LSP group A-D; 3/3
LSPs routed (if present) in 0:00:00.001266; routing demands now . . .
Demands routed in 0:00:00.009542; validating model . . . 
making visualization

*** NOTE: The make_visualization_beta function is a beta feature.  It may not have been as 
extensively tested as the pyNTM code in general.  The API calls for this may also 
change more rapidly than the general pyNTM code base.



Visualization is available at http://127.0.0.1:8050/


Dash is running on http://127.0.0.1:8050/

 * Serving Flask app "pyNTM.weathermap" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
updating simulation manually
Routing the LSPs . . . 
Routing 1 LSPs in parallel LSP group A-F; 1/3
Routing 2 LSPs in parallel LSP group A-D; 2/3
Routing 1 LSPs in parallel LSP group F-E; 3/3
LSPs routed (if present) in 0:00:00.001274; routing demands now . . .
Demands routed in 0:00:00.009616; validating model . . . 
making visualization

*** NOTE: The make_visualization_beta function is a beta feature.  It may not have been as 
extensively tested as the pyNTM code in general.  The API calls for this may also 
change more rapidly than the general pyNTM code base.



Visualization is available at http://127.0.0.1:8050/


IndentationError: expected an indented block in flex_model.py (dev branch)

Hello Tim,

something seems to be missing at line 503 in the file flex_model.py:

Traceback (most recent call last):
File "test_traffic_eng_features.py", line 8, in
from pyNTM import PerformanceModel
File "", line 983, in _find_and_load
File "", line 967, in _find_and_load_unlocked
File "", line 668, in _load_unlocked
File "", line 638, in _load_backward_compatible
File "/home/martin/simu_v2/lib/python3.7/site-packages/pyNTM-2.0-py3.7.egg/pyNTM/init.py", line 14, in
File "", line 983, in _find_and_load
File "", line 963, in _find_and_load_unlocked
File "", line 906, in _find_spec
File "", line 1280, in find_spec
File "", line 1254, in _get_spec
File "", line 1235, in _legacy_get_spec
File "", line 441, in spec_from_loader
File "", line 594, in spec_from_file_location
File "/home/martin/simu_v2/lib/python3.7/site-packages/pyNTM-2.0-py3.7.egg/pyNTM/flex_model.py", line 503
max_split = max([split for split in traffic_splits_per_interface.values()])
^
IndentationError: expected an indented block

Kind Regards...
Martin

Support for IGP shortcuts in Demand

Hello,

Feature request

Add support for IGP shortcuts in the model Demand. This will help with networks that have a mix of RSVP-LSP and LDP-based traffic, where only the core is RSVP and the rest is still LDP (IGP routed). In the current implementation, if we have a topology A-B-C where there is an LSP from A-B but the demand is A-C, the model will not put traffic on the LSP but rather use the IGP shortest path between A-C.

Having an LSP flag (shortcut) would help in simulating this scenario: the model would look at the path between A-C, find the LSP objects that start at A and have a tail-end destination on the A-C path, use that LSP (A-B), and then switch to the IGP shortest path once B is reached (i.e., B-C would still be IGP routed).

Hope this makes sense.

Regards,
