open-traffic-generator / models
Open Traffic Generator Core Models
License: MIT License
"What is the 'reserved' field in the IPv4 header format? Also, the 'options' field is missing from the header."
Currently, line rate is of type integer, but in some cases we need to provide a line rate of less than 1.
Add the equivalent of setting the crc attribute of a configElement in IxNetwork:
https://10.36.75.5/ixnetworkweb/ixnrest/#/api-browser/api/v1/sessions/88/ixnetwork/traffic/trafficItem/1/configElement/1
Basic authentication support (with a sane multiuser workflow):
• The user will log in with the intended username at the beginning of the script
• They need not provide a password (if they do, it'll be ignored)
• The user implicitly owns the resources described in their config after POST /config
• If POST /config contains resources already owned by another user, a respective error will be returned
• Re-logging in with the same username (without having logged out previously) will release any previously owned resources
• There's no predefined list of usernames, nor will anybody be allowed to create one – hence, usernames will just be validated for having a certain length and containing alphanumeric chars
The user will log out at the end of the script
• If the user was logged in, it will release previously owned resources
• If the user was not logged in, it's a no-op
• If the user was logged in and didn't explicitly log out, the resources will be released after X minutes of inactivity (or when the apiserver is restarted)
No extra action is required by the user to send consecutive requests
• The OpenAPI client will implicitly insert the username / password in the header for each request
• In any case, requests with a header containing a username that isn't logged in will error out
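To make the proposed workflow concrete, here's a minimal sketch of what a client could do under the hood. The header construction is standard HTTP Basic auth; the behavior described in the comments is the proposal above, not the current spec, and the function name is made up.

```python
import base64

def auth_header(username: str, password: str = "") -> dict:
    # The password is accepted but ignored by the server; only the username
    # (length-checked, alphanumeric) identifies the resource owner.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Every subsequent request (POST /config, results fetches, ...) would carry
# this header implicitly, as inserted by the generated OpenAPI client.
hdr = auth_header("alice")
```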
Stats results should go back to returning simple key-value pairs instead of a table view, because the table view adds more boilerplate to the script:
• One has to find the index of each column name
• One has to make an assumption about the type of each returned value and cast it accordingly (since everything is returned as strings)
I think it's possible to implement a GraphQL-like query-response in OpenAPI (to fetch only the intended columns).
The possible issues I can think of are:
• The same key names inside the stats for each flow/port will be returned
• We might by default expose all stats that are not applicable for a given use case (but this should not be much of a problem since users will focus on the stats they're interested in)
e.g. see the snippet below for a comparison in the minimal use case
# fetch stats
result = results_api.get_port_results(
    ResultPortRequest(
        port_names=[p.name for p in PORTS],
        columns=['name', 'frames_tx', 'frames_rx']
    )
)
# extract and compare stats using key-value pair
assert result.rows[0]['frames_tx'] == result.rows[1]['frames_rx'] == count
# extract and compare stats using table view
tx = result.columns.index('frames_tx')
rx = result.columns.index('frames_rx')
assert int(result.rows[0][tx]) == count and int(result.rows[1][rx]) == count
Is there any inherent benefit in having two separate counter patterns, i.e. Increment and Decrement?
Current:
ip = FlowIpv4(src=FlowPattern(choice='increment', increment=FlowIncrement(start='1.1.1.1', step='0.0.0.1', count=10)))
ip = FlowIpv4(src=FlowPattern(choice='decrement', decrement=FlowDecrement(start='1.1.1.10', step='0.0.0.1', count=10)))
Proposed:
ip = FlowIpv4(src=FlowPattern(choice='counter', counter=Counter(start='1.1.1.1', step='0.0.0.1', count=10, up=True)))
ip = FlowIpv4(src=FlowPattern(choice='counter', counter=Counter(start='1.1.1.10', step='0.0.0.1', count=10, up=False)))
The TCP packet header is missing the 16-bit window parameter.
@ajbalogh
The "delay" field is missing in the "burst" duration.
Also, for the rest of the choices (fixed_packets, fixed_seconds, continuous and burst), don't we have to mention the unit for the "gap" field?
There's a team in India trying to run fuzz testing on the OTG API, and what they need is for examples to be available for most nodes.
There are two known problems:
We can fix the first problem. We can't fix the second problem, but it will definitely affect the correctness.
FlowPattern accepts field inputs as strings regardless of whether the field is numeric (e.g. VLAN ID, TCP port) or a string (IP/MAC address). The proposal is to use something like StrField and NumField instead. Also, the Field keyword seems more relevant than Pattern.
NOTE: This will also require separate models for Random and Counter.
Current:
ip = FlowIpv4(src=FlowPattern(choice='fixed', fixed='1.1.1.1'))
ip = FlowIpv4(src=FlowPattern(choice='list', list=['1.1.1.1', '1.1.1.2']))
vlan = FlowVlan(id=FlowPattern(choice='fixed', fixed='10'))
vlan = FlowVlan(id=FlowPattern(choice='list', list=['10', '11']))
Proposed:
ip = FlowIpv4(src=StrField(choice='fixed', fixed='1.1.1.1'))
ip = FlowIpv4(src=StrField(choice='list', list=['1.1.1.1', '1.1.1.2']))
vlan = FlowVlan(id=NumField(choice='fixed', fixed=10))
vlan = FlowVlan(id=NumField(choice='list', list=[10, 11]))
I need support to verify that traffic was received OK (the expected rx port received all transmitted packets, and all other ports not in the flow did not receive any packets).
In Counter the step is a str, whereas in Random the step is an int|float. I'm not able to come up with any use case for the differing data types.
Should the API server always provide latency measurements by default? If not, how does a user specify that they want to turn off latency measurements?
Latency measurement demands frame sizes larger than 64B in certain cases (when Ethernet/IP/TCP is configured, the minimum size increases to 80B) - from the user's perspective, they'll just get a warning that the frame size has been adjusted.
What if the user wants to exercise a 64B frame size and doesn't really care about latency measurements?
Currently we don't have any option to set a pattern like increment/decrement on the IP identification field. This would be helpful for marking packets at the source end (say, in an incrementing sequence). At the receiving end, we could then check whether the identification fields of the packets are in incrementing sequence, to verify that the packets were received in order.
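The receive-side check described above can be done entirely client-side once per-packet identification values are available; a minimal sketch (the function name is made up):

```python
def received_in_order(ip_ids):
    # True when each IPv4 identification value is the previous one + 1,
    # wrapping modulo 2**16 (the field is 16 bits wide).
    return all((b - a) % 65536 == 1 for a, b in zip(ip_ids, ip_ids[1:]))

received_in_order([65534, 65535, 0, 1])  # -> True (in order, including the wrap)
received_in_order([5000, 5002, 5001])    # -> False (reordered)
```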
Shouldn't the enum values 'valid checksum' and 'invalid checksum' suffice instead of a pattern?
This is what I intended to do:
packet=[
    FlowHeader(choice='ethernet', ethernet=FlowEthernet()),
    FlowHeader(choice='ipv4', ipv4=FlowIpv4()),
    # this is completely made up and not a part of spec
    # but it's supposed to assign a stream of 10 'AB' (and repeat the same until it hits the checksum offset)
    FlowHeader(choice='payload', custom=FlowPayload(choice='hexstr', hexstr='AB'*10))
]
It's technically incorrect to specify a payload as a FlowHeader container, but let's assume that we're allowed to do that for now. The closest thing I could find was FlowCustom, but I encountered some limitations.
# According to spec, this will overlay the bytes from the first offset of the ethernet fragment.
FlowCustom(bytes='AB'*10)
# This does the job, but I have to explicitly specify the start offset of the payload.
FlowCustom(patterns=[FlowBitPattern(choice='bitlist', bitlist=FlowBitList(offset=34, values='AB'*10))])
# This also does the job, but it has a length limitation and also requires a start offset.
FlowCustom(patterns=[FlowBitPattern(choice='bitcounter', bitcounter=BitCounter(offset=34, ...))])
I believe it's important for the user to not have to deduce the payload offset when specifying a custom payload.
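One way to avoid that deduction today is a small client-side helper that sums the lengths of the configured headers. The helper and its length table are my own sketch, assuming fixed-size headers (no IPv4 options or IPv6 extensions):

```python
# Assumed fixed header sizes in bytes (no IPv4 options, no IPv6 extensions).
HEADER_LEN = {'ethernet': 14, 'vlan': 4, 'ipv4': 20, 'ipv6': 40, 'tcp': 20, 'udp': 8}

def payload_offset(header_choices):
    # Offset of the first payload byte, given the ordered header choices.
    return sum(HEADER_LEN[h] for h in header_choices)

payload_offset(['ethernet', 'ipv4'])  # -> 34, the offset hard-coded above
```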
These are my observations:
>> flowHeaderList = config.flows[0].packet
>> type(flowHeaderList)
<class 'snappi.flowheaderlist.FlowHeaderList'>
>> type(flowHeaderList[0])
<class 'snappi.flowethernet.FlowEthernet'>
>> type(flowHeaderList._items[0])
<class 'snappi.flowheader.FlowHeader'>
It looks like snappi returns the exact header (in this case FlowEthernet) rather than FlowHeader when iterating through FlowHeaderList, even though the internal _items actually stores FlowHeader. The IxNetwork concrete implementation normally iterates through the list object to check the choice of each object and takes decisions accordingly, say for this case:
for flow_header in flowHeaderList:
    flow_header.choice
Please let me know if this is expected behavior; then we will handle it accordingly.
Looks like Layer1.EthernetVirtual was left out of the Layer1 config.
The model does not define the contents of the payload and leaves it implementation-specific. That might cause scripts to be non-portable across different implementations. The model needs to specify the default.
The package name in protobuf generation should be replaced with something specific to this open model effort.
Along with port_name (a mandatory field), firstFrame and lastFrame should be added as optional fields, because the user might be interested in getting only a subset of the received packets as the captured result.
When only port_name is given, all packets should be returned in the captured result.
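The intended semantics could look like the following sketch; the function and the first_frame/last_frame parameters mirror the proposed optional fields and are hypothetical, not part of the current model:

```python
def captured_frames(packets, first_frame=None, last_frame=None):
    # Only the port's packet list is mandatory; the optional bounds narrow
    # the result. With no bounds given, all captured packets are returned.
    first = 0 if first_frame is None else first_frame
    last = len(packets) if last_frame is None else last_frame
    return packets[first:last]

captured_frames(list(range(10)))        # all 10 packets
captured_frames(list(range(10)), 2, 5)  # -> [2, 3, 4]
```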
@ankur-sheth @winstonliu1111 @ajbalogh
We specifically opted out of meshing and bidirectional config to make sure meshing and bidirectional config can still be programmatically achieved. So why should we still expect a list of Rx port names instead of a single Rx port name for each flow?
# multiple rx port names per flow
flows=[Flow(name='f1', tx_rx=FlowTxRx(tx_port_name='p1', rx_port_names=['p2', 'p3']))]
# single rx port name per flow
flows=[
    Flow(name='f1', tx_rx=FlowTxRx(tx_port_name='p1', rx_port_name='p2')),
    Flow(name='f2', tx_rx=FlowTxRx(tx_port_name='p1', rx_port_name='p3'))
]
One possible answer is terse code, but we could've achieved even terser code by including options for multiple tx port names, meshing and bidirectional config.
What other reasons do we have to expect multiple rx port names for each flow?
Because this, IMO, feels like a loose end that brings back the complexity I mentioned above (first two points).
Need support for clearing stats, with reference to this issue.
As a user,
And I expect to assert on these conditions to make sure that they were met.
Customer requested support for MPLS encapsulated traffic.
In the capture model, include support to save the capture file in text/hex-dump format as well (if possible).
Create lag models.
Add support for adding/configuring protocol templates in /ixnetwork/traffic/trafficItem/<>/configElement/<>/stack
Adding @ankur-sheth @winstonliu1111 for inputs if any.
Flow in athena is analogous to FlowGroup in IxNetwork, because Rate, Size and Duration can be independently controlled per FlowGroup (which internally means they're mapped to one or more hardware streams). Packet flows belonging to different PGIDs then constitute what IxNetwork calls a Flow.
PGIDs are usually used internally to track statistics for:
• FlowGroup
IxNetwork, by default, only tracks port statistics; Flow statistics are tracked in addition (even if one doesn't need them). For Flow stats - noting that ingress field stats are not part of per-flow stats - it's not clear from the doc how to filter out some rows in the returned results. e.g. From the tables below, at first glance, the filter option should help reduce the number of rows (and it does when viewing flow stats). But upon closer examination, it will just affect the aggregation value when viewing ingress field stats.
Hence, FlowStatRequest should at least allow filtering rows (irrespective of whether it's per flow or per field value).
# Two flows are configured, each with 4 incr UDP Source Port values, which repeats across 100 packets.
# Stats per Flow (flow_names=['f1', 'f2'])
flow frames_rx
f1 100
f2 100
# Stats per Flow (flow_names=['f1'])
flow frames_rx
f1 100
# Flow stats with "UDP sport" as ingress name (flow_names=['f1', 'f2'])
flow UDP sport frames_rx
f1 5000 25
f1 5001 25
f1 5002 25
f1 5003 25
f2 5000 25
f2 5001 25
f2 5002 25
f2 5003 25
# Flow stats with "UDP sport" as ingress name (flow_names=['f1'])
flow UDP sport frames_rx
f1 5000 25
f1 5001 25
f1 5002 25
f1 5003 25
# How to get this table ? Stats per UDP Source Port (flow_names=['f1', 'f2'])
UDP sport frames_rx
5000 50
5001 50
5002 50
5003 50
# How to get this table ? Stats per UDP Source Port 5001 and 5003 (flow_names=['f1', 'f2'])
UDP sport frames_rx
5001 50
5003 50
NOTE: I am not requesting a fix here yet, just wanted to discuss some possibilities.
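For discussion's sake, the two "How to get this table?" views above can already be derived client-side by aggregating the per-flow ingress rows; a sketch over the example data (the (flow, sport, frames_rx) row layout and function name are assumptions):

```python
from collections import defaultdict

# Rows as returned for flow_names=['f1', 'f2'] with "UDP sport" as ingress name.
rows = [('f1', 5000, 25), ('f1', 5001, 25), ('f1', 5002, 25), ('f1', 5003, 25),
        ('f2', 5000, 25), ('f2', 5001, 25), ('f2', 5002, 25), ('f2', 5003, 25)]

def frames_per_sport(rows, sports=None):
    # Sum frames_rx across flows, optionally restricted to certain sports.
    totals = defaultdict(int)
    for _flow, sport, frames_rx in rows:
        if sports is None or sport in sports:
            totals[sport] += frames_rx
    return dict(totals)

frames_per_sport(rows)                # -> {5000: 50, 5001: 50, 5002: 50, 5003: 50}
frames_per_sport(rows, {5001, 5003})  # -> {5001: 50, 5003: 50}
```

Having the API do this server-side would avoid shipping every ingress row to the client just to aggregate them.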
Currently we describe packet headers like so:
packet=[
    FlowHeader(choice='ethernet', ethernet=FlowEthernet()),
    FlowHeader(choice='vlan', vlan=FlowVlan()),
    FlowHeader(choice='vlan', vlan=FlowVlan()),
    FlowHeader(choice='ipv4', ipv4=FlowIpv4()),
]
An alternative could be:
packet = Packet(
    ethernet=FlowEthernet(),
    vlans=[FlowVlan(), FlowVlan()],
    ipv4=FlowIpv4()
)
Since all fields in Packet are nullable, it may not require an explicit choice input. Moreover, its structure is very similar to FlowHeader, except that it accepts multiple parameters. So it's terser in config and more compact on the wire.
In fact, I can access fields like so: packet.ethernet.src.fixed
But all that could be considered just a gimmick. Hence, what advantages does the former have over the latter?
Follow the example from:
src: https://github.com/protocolbuffers/protobuf
build: https://github.com/protocolbuffers/protobuf/releases/tag/v3.13.0
@ajbalogh I'm assigning this to myself. Is it OK to add a step to the workflow action that validates the generated .proto files using protoc?
To support one lead customer for Athena, we need to add support for the GTP encapsulation.
Currently a single instance of ResultPort and ResultFlow is being returned instead of an array.
@ajbalogh FYI, I'm directly making the changes since it's blocking some progress on our end. Please review.
TBD
Need an API for fetching a flow's transmit state, for cases where the next action depends on the learned traffic state.
The supported columns for Result.FlowRequest do not currently include the state of traffic.
To track flows using source MAC addr and source IP addr, here's how it's currently done:
packet = [
    FlowHeader(choice='ethernet', ethernet=FlowEthernet(), group_by=[FlowGroupBy(field='src', label='Source MAC Addr')]),
    FlowHeader(choice='ipv4', ipv4=FlowIpv4(), group_by=[FlowGroupBy(field='src', label='Source IP Addr')]),
]
Problems are:
• label is a required free-form input
• track_by vs group_by - subjective
We could essentially configure it as part of Flow instead of Header, so that all the fields can be provided in one place. (It also goes along well with how we return group_by stats - i.e. they're directly contained within the flow stats themselves.)
In fact, we could define a list of ENUMs which would be descriptive enough and not require a label input.
Flow(
    packet=[
        FlowHeader(choice='ethernet', ethernet=FlowEthernet()),
        FlowHeader(choice='ipv4', ipv4=FlowIpv4()),
    ],
    # fields is a list of enums
    tracking=Tracking(fields=['src_mac_addr', 'src_ip_addr'])
)
Lastly, TrackBy or GroupBy should be extensible to include stats with custom offsets (e.g. group by values at offsets 16-18) and basic latency configuration (this is not a hard requirement in the first phase, as I understand).
e.g.
Tracking(
    fields=['src_mac_addr', 'src_ip_addr', 'custom'],
    # configure two latency bins
    latency=Latency(
        method='cut_through',
        bins=[
            LatencyBin(min=10, max=100),
            LatencyBin(min=1000, max=10000)
        ]
    ),
    custom=SomeContainer(...)
)
All TBD descriptions in the bgp model files need to be filled in with meaningful descriptions.
Rework the workflow so it does the following:
• fails on any bundle errors
• fails on any protobuf generation errors
• fails on any client generation errors
• uploads artifacts to the repo
Customer requested support for GUE (https://tools.ietf.org/html/draft-ietf-intarea-gue-09).
We need a way to specify the contents of the packet. Say a packet is 1024 bytes and all I've done is added Ethernet and IP headers to it. My understanding is that the rest of the packet is not specified in the API and will be implementation dependent. Do we need a way for the user to describe the contents of the rest of the packet?
We recently got a request from a team using athena to allow the characters ':' and '.' (and maybe '/') in flow names.
The reasoning is that it's quite common to label flows with src/dst IP/MAC addresses (and a forward slash to indicate a subnet).
I agree that it makes sense to allow these for flow names, but I'm not sure it makes sense for all kinds of names.
Start with route advertise, route withdraw.
We need to add support for GRE, VxLAN and IPv6 headers to the model. This is required for some use-cases at Nokia.