odp's Issues

Unexpected IV causes IPsec API validation to fail

Janne Peltonen 2018-10-08 12:15:36 UTC
IPsec validation code expects the ODP implementation to use, in the output ESP and AH packets, the same IV as in the test vectors. This causes valid implementations to fail validation, since an implementation may pick the IV value in any standard-compliant way it wants.

Comment 1 Bill Fischofer 2018-10-09 12:17:01 UTC
Reassigning to Dmitry for review/comment.

Comment 2 Dmitry Eremin-Solenikov 2018-12-06 13:17:50 UTC
Janne, could you please specify which test cases you are referring to?
I've tried not to enforce the IV.

Comment 3 Janne Peltonen 2018-12-10 13:23:23 UTC
I am referring to all IPsec output tests. The test cases compare the output packet produced by the IPsec implementation against the expected output packet defined in the test vector. The packets match only if the IPsec implementation chooses the same IV that is in the test vector as otherwise the encrypted bytes are totally different.

You can see which test cases could fail with other implementations by changing the IV generation a little, like this, and running make check:

--- a/platform/linux-generic/odp_ipsec.c
+++ b/platform/linux-generic/odp_ipsec.c
@@ -1032,6 +1032,7 @@ static int ipsec_out_iv(ipsec_state_t *state,
                        return -1;
        }

+       state->iv[0] += 1;
        return 0;
 }

Comment 4 Dmitry Eremin-Solenikov 2018-12-10 14:44:58 UTC
Janne,

I've rechecked IPsec testsuite. Please correct me if I'm wrong. Output-with-compare tests fall into two major groups:

  • Null encryption + HMAC-SHA-something. These test vectors do not use an IV at all and thus should generate the same packet in all circumstances

  • Null encryption + AES-GMAC. This really looks like a mistake on my side; I will correct these test cases not to use predefined test packets to compare against.

Did I miss any of the tests which use ipsec_check_out_one?

Comment 5 Janne Peltonen 2018-12-10 15:29:38 UTC
Sounds right to me.

Maybe the corrected test cases could still do the full packet comparison if they notice that the IV is the same as in the test vector. This way the current implementation would get some extra test coverage.
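
A minimal sketch of that idea, assuming CUnit assertions as used elsewhere in the suite (the helper name, parameters and IV offset handling are hypothetical, not the actual test-suite code):

#include <string.h>
#include <stdint.h>
#include <CUnit/CUnit.h>

/* Hypothetical helper: do the full byte-for-byte comparison only when the
 * implementation happened to pick the same IV as the test vector. */
static void compare_out_packet(const uint8_t *out, uint32_t out_len,
			       const uint8_t *ref, uint32_t ref_len,
			       uint32_t iv_ofs, uint32_t iv_len)
{
	CU_ASSERT(out_len == ref_len);
	if (out_len != ref_len)
		return;

	/* Encrypted bytes can match only if the IVs match */
	if (memcmp(out + iv_ofs, ref + iv_ofs, iv_len) == 0)
		CU_ASSERT(memcmp(out, ref, ref_len) == 0);
}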

fork() in fdserver is problematic

platform/linux-generic/odp_fdserver.c forks a child process to manage shared memory. This is done to make shared memory work even if ODP threads are actually OS processes. The fork happens inside odp_init_global() and is not followed by exec.

Doing a fork in a multithreaded program is somewhat problematic (see fork(2) and http://www.linuxprogrammingblog.com/threads-and-fork-think-twice-before-using-them). In particular, the child process should call only async-signal-safe functions until it execs. The fdserver implementation does not adhere to this rule and can (at least in theory) get stuck, e.g. waiting indefinitely for a mutex that was held by another thread at fork time and will never be released in the child.

One possible fix would be to add a restriction to the ODP API that the calling process must be single-threaded when it calls odp_init_global(). Another would be to have fdserver exec the actual server binary after the fork, as sketched below.
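
A minimal sketch of the fork-then-exec option, assuming a hypothetical server binary path (the real fdserver currently runs in-process after the fork):

#include <sys/types.h>
#include <unistd.h>

/* Sketch only: the binary path and helper name are illustrative. */
static int start_fdserver(void)
{
	pid_t pid = fork();

	if (pid < 0)
		return -1;

	if (pid == 0) {
		/* Child: between fork and exec, only async-signal-safe
		 * calls are allowed; exec wipes the inherited lock state. */
		execl("/usr/libexec/odp_fdserver", "odp_fdserver",
		      (char *)NULL);
		_exit(127); /* exec failed */
	}

	return 0; /* parent continues odp_init_global() */
}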

Relative paths in linux-dpdk should be replaced

Many include paths in linux-dpdk make long relative references that assume a fixed directory structure. This will cause problems with future directory restructuring, and they should be replaced with more symbolic references that are resolved at configure time.

Segmentation fault when running odp

Hello. When I try to run a webserver with odp, it shows a segmentation fault (core dumped).

HW time counter freq: 3408006263 hz

_ishm.c:881:_odp_ishm_reserve():No huge pages, fall back to normal pages. check: /proc/sys/vm/nr_hugepages.
PKTIO: initialized loop interface.
PKTIO: initialized pcap interface.
PKTIO: initialized ipc interface.
PKTIO: initialized socket mmap, use export ODP_PKTIO_DISABLE_SOCKET_MMAP=1 to disable.
PKTIO: initialized socket mmsg,use export ODP_PKTIO_DISABLE_SOCKET_MMSG=1 to disable.

ODP system info
---------------
ODP API version: 1.17.0
CPU model:       Intel(R) Core(TM) i7-6700 CPU
CPU freq (hz):   3407880000
Cache line size: 64
Core count:      4

Running ODP appl: "ofp_netwrap"
-----------------
IF-count:        1
Using IFs:       ens33

/home/admin/Downloads/ofp/scripts/ofp_netwrap.sh: line 8:  7365 Segmentation fault      (core dumped) LD_PRELOAD=libofp_netwrap_crt.so.0.0.0:libofp.so.0.0.0:libofp_netwrap_proc.so.0.0.0 $@

I set up odp in a VMware-based virtual machine, and I have allocated the virtual machine 4 cores and 2 GB of memory.

Does anyone have some idea? Thanks.

IPsec didn't work as explained in the README

Hello everyone, I'm a newbie with ODP. I found the ipsec example (under /examples/ipsec/ipsec) and read its README. After reading it a few times, I created 3 Docker containers and tried to follow it step by step, so I now have 3 VMs and 2 networks like in the README example, and I recreated the steps with my machines and networks, but it didn't work. I ran odp_ipsec in the middle of the 2 networks, with an IPsec configuration on one of my machines done with setkey (from ipsec-tools, I guess). odp_ipsec just runs the executable and exits, without printing anything explaining what to do, and the packets are sent as if odp were disabled (the same as if I disable IPsec on the other machine). Can someone help me with that? Sorry about my English, thank you all.

Generic failure on PPC64el

Dmitry Eremin-Solenikov 2018-09-03 09:49:34 UTC
All tests fail on PPC64el with the following error:

FAIL: time/time_main
====================

odp_system_info.c:314:systemcpu():Cache line sizes definitions don't match.
odp_system_info.c:364:odp_system_info_init():systemcpu failed
odp_init.c:256:odp_init_global():ODP system_info init failed.
error: odp_init_global() failed.
FAIL time/time_main (exit status: 255)

ODP build fails with GCC 10.1

GCC 10 is available in Fedora 32 and Arch Linux.

Log:

CC odp_schedule_basic.lo
In file included from ../../platform/linux-generic/include/odp_ring_u32_internal.h:19,
from odp_schedule_basic.c:25:
odp_schedule_basic.c: In function ‘schedule_term_global’:
../../platform/linux-generic/include/odp_ring_internal.h:133:20: error: array subscript ‘’ is outside the bounds of an interior zero-length array ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=zero-length-bounds]
133 | *data = ring->data[new_head & mask];
| ~~~~~~~~~~^~~~~~~~~~~~~~~~~
../../platform/linux-generic/include/odp_ring_internal.h:48:11: note: while referencing ‘data’
48 | uint32_t data[0];
| ^~~~
odp_schedule_basic.c: In function ‘schedule_pktio_start’:
../../platform/linux-generic/include/odp_ring_internal.h:207:12: error: array subscript ‘’ is outside the bounds of an interior zero-length array ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=zero-length-bounds]
207 | ring->data[new_head & mask] = data;
| ~~~~~~~~~~^~~~~~~~~~~~~~~~~
../../platform/linux-generic/include/odp_ring_internal.h:48:11: note: while referencing ‘data’
48 | uint32_t data[0];
| ^~~~
odp_schedule_basic.c: In function ‘schedule_sched_queue’:
../../platform/linux-generic/include/odp_ring_internal.h:207:12: error: array subscript ‘’ is outside the bounds of an interior zero-length array ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=zero-length-bounds]
207 | ring->data[new_head & mask] = data;
| ~~~~~~~~~~^~~~~~~~~~~~~~~~~
../../platform/linux-generic/include/odp_ring_internal.h:48:11: note: while referencing ‘data’
48 | uint32_t data[0];
| ^~~~
odp_schedule_basic.c: In function ‘do_schedule’:
../../platform/linux-generic/include/odp_ring_internal.h:133:20: error: array subscript ‘’ is outside the bounds of an interior zero-length array ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=zero-length-bounds]
133 | *data = ring->data[new_head & mask];
| ~~~~~~~~~~^~~~~~~~~~~~~~~~~
../../platform/linux-generic/include/odp_ring_internal.h:48:11: note: while referencing ‘data’
48 | uint32_t data[0];
| ^~~~
../../platform/linux-generic/include/odp_ring_internal.h:207:12: error: array subscript ‘’ is outside the bounds of an interior zero-length array ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=zero-length-bounds]
207 | ring->data[new_head & mask] = data;
| ~~~~~~~~~~^~~~~~~~~~~~~~~~~
../../platform/linux-generic/include/odp_ring_internal.h:48:11: note: while referencing ‘data’
48 | uint32_t data[0];
| ^~~~
../../platform/linux-generic/include/odp_ring_internal.h:207:12: error: array subscript ‘’ is outside the bounds of an interior zero-length array ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=zero-length-bounds]
207 | ring->data[new_head & mask] = data;
| ~~~~~~~~~~^~~~~~~~~~~~~~~~~
../../platform/linux-generic/include/odp_ring_internal.h:48:11: note: while referencing ‘data’
48 | uint32_t data[0];
| ^~~~
../../platform/linux-generic/include/odp_ring_internal.h:207:12: error: array subscript ‘’ is outside the bounds of an interior zero-length array ‘uint32_t[0]’ {aka ‘unsigned int[]’} [-Werror=zero-length-bounds]
207 | ring->data[new_head & mask] = data;
| ~~~~~~~~~~^~~~~~~~~~~~~~~~~
../../platform/linux-generic/include/odp_ring_internal.h:48:11: note: while referencing ‘data’
48 | uint32_t data[0];
| ^~~~
cc1: all warnings being treated as errors
make[1]: *** [Makefile:1339: odp_schedule_basic.lo] Error 1
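
GCC 10's new -Wzero-length-bounds warning fires on the old GNU zero-length-array trick used in odp_ring_internal.h. A sketch of the usual remedy, switching to a C99 flexible array member (struct abbreviated; an illustration, not necessarily the change that was merged):

#include <stdint.h>

typedef struct {
	/* ... head/tail indexes ... */
	uint32_t mask;
	uint32_t data[];   /* was: uint32_t data[0]; */
} ring_t;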

scalable scheduler does not respect timers.

All schedulers except the scalable scheduler seem to respect scheduled timer queues.
Is this by design or a mistake?

The diff below seems to "fix" the functional issue; please note that I have not measured its effect on scalability.


--- a/platform/linux-generic/odp_schedule_scalable.c
+++ b/platform/linux-generic/odp_schedule_scalable.c
@@ -23,6 +23,7 @@
 #include <odp_debug_internal.h>
 #include <odp_ishm_internal.h>
 #include <odp_ishmpool_internal.h>
+#include <odp_timer_internal.h>
 
 #include <odp_align_internal.h>
 #include <odp_buffer_inlines.h>
@@ -889,6 +890,8 @@ static int _schedule(odp_queue_t *from, odp_event_t ev[], int num_evts)
        ts = sched_ts;
        atomq = ts->atomq;
 
+       timer_run();
+
        /* Once an atomic queue has been scheduled to a thread, it will stay
         * on that thread until empty or 'rotated' by WRR
         */

Netmap zero-copy input is not supported

ODP does two memory copies when used with netmap: a copy from netmap input buffers to odp_packet_t buffers, and a copy from odp_packet_t buffers to netmap output buffers.

I asked the netmap developers whether freeing buffers in a different order than allocating them is supported, and yes it is: luigirizzo/netmap#475

The input-side copy, at least, is unnecessary: ODP packets could refer directly to the netmap buffers. This requires manipulating buf_idx when returning buffers back to the kernel, and possibly setting NS_BUF_CHANGED when buf_idx changes. It might also require two internal implementations of the odp_packet_t data type: packets that were allocated by receiving them (and thus refer to netmap memory), and packets that were allocated entirely within userspace (and thus refer to separately allocated userspace memory). With clever design, the ODP API would not need to change, and the two variants would remain an internal implementation detail.

I'll see if I have time to do a prototype of this in LDP (https://github.com/jmtilli/pptk/tree/master/ldp) which currently has zero-copy input but requires one to deallocate packets in the same order they were allocated.

One potential problem: with LDP you specify an input queue when freeing packets, and that queue then receives the freed buffers, whereas ODP's odp_packet_free() does not take an input queue as a parameter. If the packet data structure is modified to hold a pointer to the input queue it was received from, the pointer is available at free time, but the thread freeing the packet may differ from the thread that received it, so a mutex lock would be required both for netmap input and for freeing packets. There may therefore still be some roadblocks before this can be implemented. I initially designed LDP to interface with netmap as well as possible, so LDP doesn't have similar roadblocks.

@MatiasElo, as the implementer of the netmap pktio, please take note of this issue. Zero-copy input could improve ODP performance significantly, especially with VALE, perhaps close to the levels offered by LDP.
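
A sketch of the per-packet state this would need (all field names are hypothetical, not the actual linux-generic packet header layout):

#include <stdint.h>

/* Hypothetical extra packet metadata for netmap-backed buffers. On
 * odp_packet_free(), a netmap-backed packet would be returned (under the
 * ring's lock) to the input ring recorded here, updating buf_idx and
 * setting NS_BUF_CHANGED as needed. */
typedef struct {
	uint32_t buf_idx;       /* netmap buffer index */
	uint8_t  netmap_backed; /* 1: data points into netmap memory */
	void    *input_ring;    /* ring to return the buffer to on free */
} pkt_netmap_meta_t;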

SIGSEGV: example/sysinfo (raspberry Pi 3B+, arm7, odp-linux, gcc 8.2.0)

ODP compiled on a Raspberry Pi 3 B+; GNU GCC 8.2.0; Raspbian 32-bit.

The reason for the SIGSEGV: use of the printf format specifier "%lu" on uint64_t values instead of "%llu",
in odp_ishm.c, function int _odp_ishm_term_local(void): lines 2040, 2052 and 2055.

odp configuration:
config_odp.txt

run of example/sysinfo/odp_sysinfo:
segmentation_sysinfo.txt (https://github.com/OpenDataPlane/odp/files/4054492/segmentation_sysinfo.txt)

gdb output:
gdb_sysinfo.txt

suggested solution:
odp_ishm.c.txt
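
A minimal illustration of the suggested fix, using the portable inttypes.h format macros (on 32-bit ARM, "%lu" expects a 32-bit argument while uint64_t is 64 bits, so printf misreads its varargs):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t len = 1234567890123ULL;

	/* wrong on 32-bit targets: printf("len %lu\n", len); */
	printf("len %" PRIu64 "\n", len);
	return 0;
}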

ODP should be able to be built without libcrypto from OpenSSL

Currently, my understanding is that ODP always requires libcrypto from OpenSSL. At least in Tiger Moth LTS, I didn't see any option to turn off linking with libcrypto.

I see this as problematic for many reasons:

  • Not all ODP applications require cryptography. Some may want to use ODP only for packet I/O without cryptography.
  • OpenSSL has a huge static link size. Dynamic linking, on the other hand, easily leads to version incompatibilities.
  • OpenSSL's security track record is poor. There are better alternatives such as BearSSL, but a dependency on BearSSL would not be much better, since OpenSSL is usually installed by default and BearSSL rarely is.
  • Cryptography often has export/import restrictions. Shipping software that uses ODP linked with libcrypto across country borders means it is exported from one country and imported into another.
  • The license of OpenSSL is extremely problematic, unlike the license of ODP. (But, as a counterexample, the license of libconfig is even more problematic.)

Therefore, it should be considered whether it is feasible to allow building ODP without linking to libcrypto. If this is done, there should be a ./configure option that allows turning off the linking even when OpenSSL is installed (as it usually is).

bus error

Sometimes, when my ODP application exits and is restarted, it encounters a bus error. Once this happens, it does not disappear unless the Linux OS is restarted; if the OS is not restarted, the bus error appears every time the application is started.

Although restarting the Linux OS temporarily avoids this problem, it reappears after a few days.

Can I get any help from here? Thank you.

Linux CentOS 3.10.0-514-el7.x86_64

IPsec extended sequence number support is missing

Janne Peltonen 2018-09-07 13:45:53 UTC
The IPsec API supports extended sequence numbers but the underlying implementation does not, even though there is no capability flag that would allow the implementation to not support them.

Comment 1 Bill Fischofer 2018-09-07 13:47:37 UTC
Dmitry, can you take a look at this and comment?

Comment 2 Dmitry Eremin-Solenikov 2018-09-11 10:32:42 UTC
ESN is a tricky part of the standard, especially "retry the high bits". It does not play well with the ODP crypto part. I will work on implementing ESN support, but it will take time.

Related question: do we expect that all hardware that implements IPsec support will also have ESN support or do we need an ESN capability?

Comment 3 Bill Fischofer 2018-09-11 10:55:36 UTC
Per RFC 4303, ESNs are optional in IPsec and must be negotiated by IKE. The odp_ipsec_sa_opt_t has an esn bit, requesting that the SA be created with ESN support. The expected behavior is for the odp_ipsec_sa_create() call to fail if the underlying implementation does not support ESN.

So the first question is, since we currently don't support ESN are requests to create ESN-enabled SAs being failed? If not, that's certainly a bug. If they are then this isn't a bug per se, but rather a request to add support for this optional feature.

Comment 4 Bill Fischofer 2018-09-11 10:56:04 UTC
Sorry, make that RFC 4304 in the above comment.

Comment 5 Janne Peltonen 2018-09-11 12:11:05 UTC
Currently odp_ipsec_sa_create() silently ignores the esn flag and creates an SA with a regular sequence number.

Comment 6 Dmitry Eremin-Solenikov 2018-09-11 12:12:06 UTC
Created https://bugs.linaro.org/show_bug.cgi?id=4002 to track ESN-rejection
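
For reference, the interim behavior discussed above (rejecting ESN SAs until support exists) would look roughly like this inside SA creation. The check itself is illustrative; odp_ipsec_sa_param_t, its opt.esn bit and ODP_IPSEC_SA_INVALID are part of the ODP API:

/* Sketch: fail instead of silently ignoring the esn flag */
odp_ipsec_sa_t odp_ipsec_sa_create(const odp_ipsec_sa_param_t *param)
{
	if (param->opt.esn)
		return ODP_IPSEC_SA_INVALID; /* ESN not implemented yet */

	/* ... normal SA creation ... */
}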

Clang build fails on Ubuntu 18.04

Petri Savolainen 2018-10-26 10:45:52 UTC

Making all in miscellaneous
make[2]: Entering directory 'odp/test/miscellaneous'
  CXX      odp_api_from_cpp.o
  CXXLD    odp_api_from_cpp
/usr/bin/ld: odp/lib/.libs/libodp-linux.a(odp_impl.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: odp/lib/.libs/libodp-linux.a(odp_version.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC
/usr/bin/ld: final link failed: Nonrepresentable section on output
collect2: error: ld returned 1 exit status
Makefile:734: recipe for target 'odp_api_from_cpp' failed

Ubuntu 18.04, kernel 4.15.0-38-generic
clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final)

Just build with defaults:

git clean -xdf
./bootstrap
./configure CC=clang
make

For some reason the clang build passes on Travis. Maybe the Ubuntu 18.04 Docker image is not up-to-date.

Comment 1 Petri Savolainen 2018-10-26 10:59:58 UTC
Build passes with:

./configure CC=clang CFLAGS=-fPIC

So, maybe the default C flags differ between Ubuntu / clang versions.

Comment 2 Bill Fischofer 2018-11-14 02:27:06 UTC
This has been a known issue for some time. C++ in clang now needs -fPIC to resolve. This seems to be a feature of the latest versions of clang.

Comment 3 Dmitry Eremin-Solenikov 2018-11-19 04:47:08 UTC
Could you please try checking if -fPIE is enough?

Comment 4 Petri Savolainen 2018-11-19 12:34:13 UTC
Yes, -fPIE helps.

./configure CC=clang CFLAGS=-fPIE

IPsec SA lookup may leave extra SAs locked

If an application has created more than one SA with the same SPI and with ODP_IPSEC_LOOKUP_SPI lookup mode, _odp_ipsec_sa_lookup() matches the last SA it sees but leaves all of them locked.

Such SA configuration does not seem to make much sense but is allowed in the ODP API if the application sets the spi_overlap config bit.

I suggest adding this in the ODP_IPSEC_LOOKUP_SPI branch as well:
if (NULL != best)
	_odp_ipsec_sa_unuse(best);

system tests failure on PPA builder

Dmitry Eremin-Solenikov 2018-09-03 10:00:55 UTC
Two system_test failures on both i386 and amd64 when using PPA autobuilder:

  Test: system_test_odp_cpu_hz_max ...FAILED
    1. system.c:309  - 0 < hz
  Test: system_test_odp_cpu_hz_max_id ...FAILED
    1. system.c:323  - 0 < hz
    2. system.c:323  - 0 < hz
    3. system.c:323  - 0 < hz

Comment 1 Dmitry Eremin-Solenikov 2018-09-04 08:49:31 UTC
It looks like on PPA machines /proc/cpuinfo is not fully compatible with what ODP expects:

processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 60
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
stepping	: 1
microcode	: 0x1
cpu MHz		: 2596.988
cache size	: 4096 KB
physical id	: 0
siblings	: 1
core id		: 0
cpu cores	: 1
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single ssbd ibrs ibpb kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
bugs		: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips	: 5193.97
clflush size	: 64
cache_alignment	: 64
address sizes	: 40 bits physical, 48 bits virtual
power management:

Missing AES-CCM ipsec test cases

Dmitry Eremin-Solenikov 2018-12-10 14:47:58 UTC
It looks like I didn't add AES-CCM test cases into the IPsec test suite.
We should consider adding IPsec AES-CCM test vectors.

IPsec SA may be used before fully initialized

There appears to be a thread synchronization problem in inbound IPsec SA creation when SA lookup offload is used.

odp_ipsec_sa_create() starts with ipsec_sa_reserve(), which reserves a free SA entry and marks it non-free. At this point, before odp_ipsec_sa_create() has completed, another thread may invoke odp_ipsec_in() without an explicit SA, asking ODP to look up the SA.

odp_ipsec_in() indirectly calls _odp_ipsec_sa_lookup(), which treats the SA that is still being created as a valid SA and, depending on how far the SA initialization has gone or what the leftover values in the SA are, may decide that the packet matches the SA. In that case subsequent steps would use the SA (e.g. invoke a crypto operation) before SA creation is complete (e.g. before the crypto session has been created).

There is no similar problem when SA lookup is not offloaded to ODP since an application is supposed to ensure that SA creation is complete and visible to another thread before using the SA in the other thread.
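
One conventional fix, sketched below with C11 atomics (the state field and helper names are hypothetical): publish the SA with a release store only after initialization completes, and have the lookup skip entries not yet marked ready.

#include <stdatomic.h>

enum { SA_FREE, SA_RESERVED, SA_READY };

typedef struct {
	atomic_uint state;
	/* ... SPI, keys, crypto session ... */
} ipsec_sa_t;

/* Creation path: all fields written before this call become visible to
 * readers that observe SA_READY. */
static void sa_publish(ipsec_sa_t *sa)
{
	atomic_store_explicit(&sa->state, SA_READY, memory_order_release);
}

/* Lookup path: skip half-built entries. */
static int sa_usable(ipsec_sa_t *sa)
{
	return atomic_load_explicit(&sa->state, memory_order_acquire) ==
	       SA_READY;
}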

Build failing with GCC 9.2

[2020-02-28T20:18:45.319Z] In function 'parse_options',
[2020-02-28T20:18:45.319Z]     inlined from 'main' at odp_packet_gen.c:1253:6:
[2020-02-28T20:18:45.319Z] odp_packet_gen.c:292:4: error: 'strncpy' specified bound 24 equals destination size [-Werror=stringop-truncation]
[2020-02-28T20:18:45.320Z]   292 |    strncpy(test_options->ipv4_src_s, optarg,
[2020-02-28T20:18:45.320Z]       |    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[2020-02-28T20:18:45.320Z]   293 |     sizeof(test_options->ipv4_src_s));
[2020-02-28T20:18:45.320Z]       |     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[2020-02-28T20:18:45.320Z]   CC       odp_sched_pktio.o
[2020-02-28T20:18:45.586Z] cc1: all warnings being treated as errors
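
The warning means the strncpy() bound equals the destination size, so the result may be left without a NUL terminator. A typical fix, sketched against the lines quoted above (not necessarily the change that was merged):

/* Leave room for the terminator and set it explicitly */
strncpy(test_options->ipv4_src_s, optarg,
	sizeof(test_options->ipv4_src_s) - 1);
test_options->ipv4_src_s[sizeof(test_options->ipv4_src_s) - 1] = '\0';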

scheduler_test_wait_time failure on multiple architectures

Dmitry Eremin-Solenikov 2018-09-03 09:58:56 UTC
I'm observing occasional test failures on x86 (both i386 and amd64):

  Test: scheduler_test_wait_time ...FAILED
    1. scheduler.c:189  - odp_time_cmp(diff, upper_limit) <= 0

Comment 1 Dmitry Eremin-Solenikov 2018-09-04 07:23:55 UTC
Same environment (PPA builder) has generated following related test failure:

  Test: time_test_wait_ns ...Exceed upper limit: diff is 3022303549, upper_limit 2980000000
FAILED
    1. time.c:417  - CU_FAIL("Exceed upper limit\n")

Comment 2 Dmitry Eremin-Solenikov 2018-09-04 07:45:23 UTC

  Test: timer_test_sched_queue ...timer.c:261:timer_test_queue_type():
Timer pool parameters:
timer.c:262:timer_test_queue_type():  res_ns  20000000
timer.c:263:timer_test_queue_type():  min_tmo 100000000
timer.c:264:timer_test_queue_type():  max_tmo 1000000000000
timer.c:288:timer_test_queue_type():  period_ns 400000000
timer.c:289:timer_test_queue_type():  period_tick 20

timer.c:308:timer_test_queue_type():abs timer tick 20
timer.c:308:timer_test_queue_type():abs timer tick 40
timer.c:308:timer_test_queue_type():abs timer tick 60
timer.c:308:timer_test_queue_type():abs timer tick 80
timer.c:308:timer_test_queue_type():abs timer tick 100
timer.c:308:timer_test_queue_type():abs timer tick 120
timer.c:308:timer_test_queue_type():abs timer tick 140
timer.c:308:timer_test_queue_type():abs timer tick 160
timer.c:308:timer_test_queue_type():abs timer tick 180
timer.c:308:timer_test_queue_type():abs timer tick 200
odp_timer.c:883:timer_notify():
	1 ticks overrun on timer pool "timer_pool", timer resolution too high
timer.c:342:timer_test_queue_type():timeout tick 20, timeout period 443893442
timer.c:342:timer_test_queue_type():timeout tick 40, timeout period 427907253
timer.c:342:timer_test_queue_type():timeout tick 60, timeout period 398719066
timer.c:342:timer_test_queue_type():timeout tick 80, timeout period 405298500
timer.c:342:timer_test_queue_type():timeout tick 100, timeout period 461499062
timer.c:342:timer_test_queue_type():timeout tick 120, timeout period 393089612
timer.c:342:timer_test_queue_type():timeout tick 140, timeout period 399983611
timer.c:342:timer_test_queue_type():timeout tick 160, timeout period 389533801
timer.c:342:timer_test_queue_type():timeout tick 180, timeout period 441990381
timer.c:342:timer_test_queue_type():timeout tick 200, timeout period 532432720
timer.c:352:timer_test_queue_type():test period 4294347448
FAILED
    1. timer.c:338  - diff_period < (period_ns + (4 * res_ns))

Comment 3 Dmitry Eremin-Solenikov 2018-09-04 11:04:30 UTC
Got same issue on arm64:

  Test: scheduler_test_wait_time ...FAILED
    1. scheduler.c:189  - odp_time_cmp(diff, upper_limit) <= 0

rte_mempool_ops_alloc() is not dpdk api

rte_mempool_ops_alloc() is used by the dpdk zero-copy support but is missing from the API map file
./lib/librte_mempool/rte_mempool_version.map
in Ubuntu 18.04's dpdk-17.11.2.
In that case we cannot build zero-copy odp against the prebuilt dpdk (apt-get install dpdk).

Modular Framework: Align subsystem's data plane function pointers on cache line

The data plane function pointers in a subsystem's module class need to be aligned on a cache line boundary to avoid loading unwanted data (unwanted for the data plane) into the cache line. For example, consider:

typedef ODP_MODULE_CLASS(buffer) {
	odp_module_base_t base;

	odp_api_proto(buffer, buffer_from_event) buffer_from_event;
	odp_api_proto(buffer, buffer_to_event) buffer_to_event;
	odp_api_proto(buffer, buffer_addr) buffer_addr;
	odp_api_proto(buffer, buffer_alloc_multi) buffer_alloc_multi;
	odp_api_proto(buffer, buffer_free_multi) buffer_free_multi;
	odp_api_proto(buffer, buffer_alloc) buffer_alloc;
	odp_api_proto(buffer, buffer_free) buffer_free;
	odp_api_proto(buffer, buffer_size) buffer_size;
	odp_api_proto(buffer, buffer_is_valid) buffer_is_valid;
	odp_api_proto(buffer, buffer_pool) buffer_pool;
	odp_api_proto(buffer, buffer_print) buffer_print;
	odp_api_proto(buffer, buffer_to_u64) buffer_to_u64;
} odp_buffer_module_t;

the function pointer 'buffer_from_event' needs to be aligned on the cache line boundary.

We could also place the control plane functions of the subsystem immediately after 'base', followed by the data plane functions, which can then be aligned on a cache line boundary, as sketched below.
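
A sketch of that layout using ODP's existing ODP_ALIGNED_CACHE helper (the member placement is illustrative):

typedef ODP_MODULE_CLASS(buffer) {
	odp_module_base_t base;

	/* control plane functions, colocated with 'base' */
	odp_api_proto(buffer, buffer_print) buffer_print;
	odp_api_proto(buffer, buffer_to_u64) buffer_to_u64;

	/* data plane functions start on their own cache line */
	odp_api_proto(buffer, buffer_from_event) buffer_from_event
		ODP_ALIGNED_CACHE;
	odp_api_proto(buffer, buffer_alloc) buffer_alloc;
	odp_api_proto(buffer, buffer_free) buffer_free;
	/* ... */
} odp_buffer_module_t;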

webhook for auth is not working

GitHub scripts stopped working due to missing authorization in the scripts:
https://github.com/muvarov/githubscripts
for example:
gh-hook-mr.py

It's not clear what is actually going on, but now I get the following error:
Traceback (most recent call last):
  File "gh-hook-mr.py", line 131, in <module>
    issue.edit(title="[PATCH v%d] %s" % (version, title))
  File "/usr/local/lib/python2.7/dist-packages/github3/decorators.py", line 37, in auth_wrapper
    raise error_for(r)
github3.exceptions.AuthenticationFailed: 401 Requires authentication

I.e., on
issue.edit(title="[PATCH v%d] %s" % (version, title))
the scripts error out with an authentication error.
I tried both:
gh = login(gh_login, password=gh_password)
and
gh = login(token="my token here")
and the result is the same.

Two-factor auth is turned off.

seg[0].data MUST return to its initial value on odp_packet_reset

Liron 2018-08-19 09:03:02 UTC
Once the user performs manipulations on the head, such as 'odp_packet_push_head',
the head MUST return to its initial value on odp_packet_reset.

In older ODP versions (e.g. ODP 1.11) the 'data' pointer wasn't moved on 'push/pull' operations, only the headroom.
In current versions (even in Tiger Moth) both the headroom and 'data' are moved.
When calling odp_packet_reset, only the headroom returns to its initial value, but not the 'data' pointer.

Comment 1 Bill Fischofer 2018-08-19 11:25:16 UTC
Thanks for the report Liron. Petri, please review and comment on this.

Comment 2 Maxim Uvarov 2018-08-23 12:28:17 UTC
The data pointer needs to be cleared; the bug needs to be checked.

Comment 3 Bill Fischofer 2018-10-25 12:17:22 UTC
Liron, is this still an issue for you? We've recently added additional tests to cover this case. If you're still having problems, please elaborate. Thanks.

Comment 4 Liron 2018-10-28 08:26:05 UTC
I noticed that you added the 'reset_seg' function that handle this issue.
but I think you should move it to be under packet_init as many internal functions calls directly to this function and not to odp_packet_reset

error when compiling odp

Hello. When I execute "./configure" in the odp-master directory, it shows the errors below:

checking for __int128... yes
checking whether -latomic is needed for 128-bit atomic built-ins... yes
checking for __atomic_exchange_16 in -latomic... no
configure: error: in `/home/admin/Downloads/odp-master':
configure: error: __atomic_exchange_16 is not available

I have installed libatomic but the error persists.
Does anyone have any ideas about how to solve this problem? Thanks.
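
Here configure is probing for lock-free 128-bit atomic built-ins. On x86-64 these need the cmpxchg16b instruction (enabled with -mcx16 on older GCC) and, depending on the toolchain, libatomic. A standalone probe similar in spirit to the configure test (illustrative, not the exact macro configure runs):

#include <stdint.h>

int main(void)
{
	__int128 x = 0, v = 1, old;

	/* generic 128-bit atomic exchange built-in */
	__atomic_exchange(&x, &v, &old, __ATOMIC_SEQ_CST);
	return (int)old;
}

Compile with, e.g., gcc -mcx16 probe.c -latomic; if that fails, the toolchain lacks usable 128-bit atomics.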

ODP exposes symbols outside of odp*/_odp* namespace

Dmitry Eremin-Solenikov 2017-05-01 23:08:44 UTC
The following command reports several symbols that fall outside of the odp namespace but are still exported from the library. It would be good to limit ODP to its proper namespace and to add such a command as an additional distcheck test.

$ nm -A lib/.libs/libodp-linux.a  | grep -v ' [a-zU] ' | grep -v ' _\?odp'

lib/.libs/libodp-linux.a:_fdserver.o:0000000000000008 C client_lock
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000620 T raw_bitmap_clear
lib/.libs/libodp-linux.a:odp_bitmap.o:00000000000005e0 T raw_bitmap_set
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000560 T __sparse_bitmap_clear
lib/.libs/libodp-linux.a:odp_bitmap.o:00000000000005b0 T __sparse_bitmap_iterator
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000520 T __sparse_bitmap_set
lib/.libs/libodp-linux.a:odp_bitmap.o:00000000000001f0 T __wapl_bitmap_and
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000460 T __wapl_bitmap_clear
lib/.libs/libodp-linux.a:odp_bitmap.o:00000000000004f0 T __wapl_bitmap_iterator
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000250 T __wapl_bitmap_or
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000360 T __wapl_bitmap_set
lib/.libs/libodp-linux.a:odp_classification.o:0000000000000970 T alloc_pmr
lib/.libs/libodp-linux.a:odp_classification.o:0000000000001a30 T cls_classify_packet
lib/.libs/libodp-linux.a:odp_classification.o:0000000000000a50 T get_cos_entry
lib/.libs/libodp-linux.a:odp_classification.o:00000000000004e0 T get_cos_entry_internal
lib/.libs/libodp-linux.a:odp_classification.o:0000000000000a90 T get_pmr_entry
lib/.libs/libodp-linux.a:odp_classification.o:00000000000004f0 T get_pmr_entry_internal
lib/.libs/libodp-linux.a:odp_classification.o:0000000000001820 T match_pmr_cos
lib/.libs/libodp-linux.a:odp_classification.o:00000000000019c0 T match_qos_cos
lib/.libs/libodp-linux.a:odp_classification.o:0000000000001970 T match_qos_l2_cos
lib/.libs/libodp-linux.a:odp_classification.o:0000000000001910 T match_qos_l3_cos
lib/.libs/libodp-linux.a:odp_classification.o:00000000000018d0 T pktio_classifier_init
lib/.libs/libodp-linux.a:odp_classification.o:0000000000001800 T verify_pmr
lib/.libs/libodp-linux.a:odp_errno.o:0000000000000000 B __odp_errno
lib/.libs/libodp-linux.a:odp_packet.o:0000000000000530 T packet_alloc_multi
lib/.libs/libodp-linux.a:odp_packet.o:0000000000003b50 T packet_parse_common
lib/.libs/libodp-linux.a:odp_packet.o:0000000000003b70 T packet_parse_layer
lib/.libs/libodp-linux.a:odp_packet.o:00000000000004f0 T packet_parse_reset
lib/.libs/libodp-linux.a:odp_packet_io.o:00000000000034f0 T pktin_deq_multi
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000003160 T pktin_dequeue
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000110 T pktin_enq_multi
lib/.libs/libodp-linux.a:odp_packet_io.o:00000000000000d0 T pktin_enqueue
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000200 C pktio_entry_ptr
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000008 C pktio_tbl
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000090 T pktout_deq_multi
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000050 T pktout_dequeue
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000003ad0 T pktout_enq_multi
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000003a80 T pktout_enqueue
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000c50 T sched_cb_num_pktio
lib/.libs/libodp-linux.a:odp_packet_io.o:00000000000032d0 T sched_cb_pktin_poll
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000b90 T sched_cb_pktio_stop_finalize
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000003b50 T single_capability
lib/.libs/libodp-linux.a:ethtool.o:0000000000000000 T ethtool_stats_get_fd
lib/.libs/libodp-linux.a:io_ops.o:0000000000000000 D pktio_if_ops
lib/.libs/libodp-linux.a:ipc.o:0000000000000000 D ipc_pktio_ops
lib/.libs/libodp-linux.a:pktio_common.o:0000000000000160 T sock_stats_fd
lib/.libs/libodp-linux.a:pktio_common.o:0000000000000000 T sock_stats_reset_fd
lib/.libs/libodp-linux.a:loop.o:0000000000000000 D loopback_pktio_ops
lib/.libs/libodp-linux.a:socket.o:00000000000011c0 T link_status_fd
lib/.libs/libodp-linux.a:socket.o:0000000000000ae0 T mac_addr_get_fd
lib/.libs/libodp-linux.a:socket.o:0000000000000bc0 T mtu_get_fd
lib/.libs/libodp-linux.a:socket.o:0000000000001110 T promisc_mode_get_fd
lib/.libs/libodp-linux.a:socket.o:0000000000000fe0 T promisc_mode_set_fd
lib/.libs/libodp-linux.a:socket.o:0000000000001270 T rss_conf_get_fd
lib/.libs/libodp-linux.a:socket.o:0000000000001900 T rss_conf_get_supported_fd
lib/.libs/libodp-linux.a:socket.o:0000000000001dc0 T rss_conf_print
lib/.libs/libodp-linux.a:socket.o:0000000000001570 T rss_conf_set_fd
lib/.libs/libodp-linux.a:socket.o:0000000000000880 W sendmmsg
lib/.libs/libodp-linux.a:socket.o:0000000000000000 D sock_mmsg_pktio_ops
lib/.libs/libodp-linux.a:socket_mmap.o:0000000000000000 D sock_mmap_pktio_ops
lib/.libs/libodp-linux.a:sysfs.o:0000000000000110 T sysfs_stats
lib/.libs/libodp-linux.a:tap.o:0000000000000000 D tap_pktio_ops
lib/.libs/libodp-linux.a:ring.o:0000000000000b20 T _ring_count
lib/.libs/libodp-linux.a:ring.o:0000000000000030 T _ring_create
lib/.libs/libodp-linux.a:ring.o:0000000000000e40 T _ring_dequeue_burst
lib/.libs/libodp-linux.a:ring.o:00000000000001f0 T _ring_destroy
lib/.libs/libodp-linux.a:ring.o:0000000000000b50 T _ring_dump
lib/.libs/libodp-linux.a:ring.o:0000000000000b00 T _ring_empty
lib/.libs/libodp-linux.a:ring.o:0000000000000e00 T _ring_enqueue_burst
lib/.libs/libodp-linux.a:ring.o:0000000000000b30 T _ring_free_count
lib/.libs/libodp-linux.a:ring.o:0000000000000ae0 T _ring_full
lib/.libs/libodp-linux.a:ring.o:0000000000000d40 T _ring_list_dump
lib/.libs/libodp-linux.a:ring.o:0000000000000d80 T _ring_lookup
lib/.libs/libodp-linux.a:ring.o:0000000000000ac0 T _ring_mc_dequeue_bulk
lib/.libs/libodp-linux.a:ring.o:0000000000000e20 T _ring_mc_dequeue_burst
lib/.libs/libodp-linux.a:ring.o:00000000000006f0 T ___ring_mc_do_dequeue
lib/.libs/libodp-linux.a:ring.o:00000000000002a0 T ___ring_mp_do_enqueue
lib/.libs/libodp-linux.a:ring.o:0000000000000aa0 T _ring_mp_enqueue_bulk
lib/.libs/libodp-linux.a:ring.o:0000000000000de0 T _ring_mp_enqueue_burst
lib/.libs/libodp-linux.a:ring.o:0000000000000ad0 T _ring_sc_dequeue_bulk
lib/.libs/libodp-linux.a:ring.o:0000000000000e30 T _ring_sc_dequeue_burst
lib/.libs/libodp-linux.a:ring.o:00000000000008e0 T ___ring_sc_do_dequeue
lib/.libs/libodp-linux.a:ring.o:0000000000000280 T _ring_set_water_mark
lib/.libs/libodp-linux.a:ring.o:00000000000004e0 T ___ring_sp_do_enqueue
lib/.libs/libodp-linux.a:ring.o:0000000000000ab0 T _ring_sp_enqueue_bulk
lib/.libs/libodp-linux.a:ring.o:0000000000000df0 T _ring_sp_enqueue_burst
lib/.libs/libodp-linux.a:ring.o:0000000000000000 T _ring_tailq_init
lib/.libs/libodp-linux.a:odp_pool.o:0000000000000840 T buffer_alloc_multi
lib/.libs/libodp-linux.a:odp_pool.o:0000000000000af0 T buffer_free_multi
lib/.libs/libodp-linux.a:odp_pool.o:0000000000000008 C pool_tbl
lib/.libs/libodp-linux.a:odp_pool.o:0000000000001b10 T seg_alloc_tail
lib/.libs/libodp-linux.a:odp_pool.o:0000000000001b20 T seg_free_tail
lib/.libs/libodp-linux.a:odp_queue.o:00000000000006a0 T get_qentry
lib/.libs/libodp-linux.a:odp_queue.o:0000000000000580 T queue_deq
lib/.libs/libodp-linux.a:odp_queue.o:00000000000003a0 T queue_deq_multi
lib/.libs/libodp-linux.a:odp_queue.o:0000000000000220 T queue_enq
lib/.libs/libodp-linux.a:odp_queue.o:0000000000000000 T queue_enq_multi
lib/.libs/libodp-linux.a:odp_queue.o:0000000000001460 T queue_lock
lib/.libs/libodp-linux.a:odp_queue.o:00000000000014a0 T queue_unlock
lib/.libs/libodp-linux.a:odp_queue.o:00000000000016b0 T sched_cb_num_queues
lib/.libs/libodp-linux.a:odp_queue.o:0000000000001750 T sched_cb_queue_deq_multi
lib/.libs/libodp-linux.a:odp_queue.o:0000000000000dd0 T sched_cb_queue_destroy_finalize
lib/.libs/libodp-linux.a:odp_queue.o:0000000000001a30 T sched_cb_queue_empty
lib/.libs/libodp-linux.a:odp_queue.o:00000000000016e0 T sched_cb_queue_grp
lib/.libs/libodp-linux.a:odp_queue.o:0000000000001740 T sched_cb_queue_handle
lib/.libs/libodp-linux.a:odp_queue.o:0000000000001720 T sched_cb_queue_is_atomic
lib/.libs/libodp-linux.a:odp_queue.o:0000000000001700 T sched_cb_queue_is_ordered
lib/.libs/libodp-linux.a:odp_queue.o:00000000000016c0 T sched_cb_queue_prio
lib/.libs/libodp-linux.a:odp_schedule.o:0000000000000000 B sched_local
lib/.libs/libodp-linux.a:odp_schedule.o:0000000000000000 D schedule_default_api
lib/.libs/libodp-linux.a:odp_schedule.o:00000000000000a0 D schedule_default_fn
lib/.libs/libodp-linux.a:odp_schedule_if.o:0000000000000000 D sched_api
lib/.libs/libodp-linux.a:odp_schedule_if.o:0000000000000008 D sched_fn
lib/.libs/libodp-linux.a:odp_schedule_sp.o:0000000000000000 D schedule_sp_api
lib/.libs/libodp-linux.a:odp_schedule_sp.o:00000000000000a0 D schedule_sp_fn
lib/.libs/libodp-linux.a:odp_schedule_iquery.o:0000000000000000 D schedule_iquery_api
lib/.libs/libodp-linux.a:odp_schedule_iquery.o:00000000000000a0 D schedule_iquery_fn
lib/.libs/libodp-linux.a:odp_schedule_iquery.o:0000000000000000 B thread_local
lib/.libs/libodp-linux.a:odp_sysinfo_parse.o:0000000000000000 T cpuinfo_parser
lib/.libs/libodp-linux.a:pcap.o:0000000000000000 D pcap_pktio_ops

Comment 1 Bill Fischofer 2017-05-04 00:01:44 UTC
I'm not sure this is correct. You want to be looking at the .so file, not the .o files. The .o files have to keep internal names visible so that they can be linked to produce the .so. However, look at the following:

nm -g libodp-linux.so

and you only see the external ODP API as visible symbols, in addition to non-ODP stuff like glibc and OpenSSL symbols that we don't control.

Comment 2 Dmitry Eremin-Solenikov 2017-05-04 08:56:14 UTC
Bill, I'm looking at the archive (libodp-linux.a), which is one way to link ODP with an app. And the archive does export those symbols, because there is no way to limit visibility with a static archive.

Comment 3 Bill Fischofer 2017-05-04 13:30:53 UTC
So is this really a bug then? The original visibility changes were intended to cover .so files.

Comment 4 Dmitry Eremin-Solenikov 2017-05-04 13:32:09 UTC
It is a bug, because the static archive exports those names. I'd suggest just renaming the respective symbols.

Comment 5 Bill Fischofer 2017-06-22 14:53:10 UTC
Dmitry, can you suggest a patch to address this?

Comment 6 Bill Fischofer 2017-08-03 14:52:49 UTC
Ping to Dmitry. Is this something we still want to address?

Comment 7 Dmitry Eremin-Solenikov 2017-08-04 07:39:18 UTC
Yes, I will take a look.

Comment 8 Bill Fischofer 2017-08-17 14:40:48 UTC
PR #108 merged.

Comment 9 Dmitry Eremin-Solenikov 2017-08-17 20:42:27 UTC
Reopened. The bug is still not fully sorted. I just fixed some low-hanging fruit for now.

$ nm -A lib/.libs/libodp-linux.a  | grep -v ' [a-zU] ' | grep -v ' _\?odp'
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000620 T raw_bitmap_clear
lib/.libs/libodp-linux.a:odp_bitmap.o:00000000000005e0 T raw_bitmap_set
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000560 T __sparse_bitmap_clear
lib/.libs/libodp-linux.a:odp_bitmap.o:00000000000005b0 T __sparse_bitmap_iterator
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000520 T __sparse_bitmap_set
lib/.libs/libodp-linux.a:odp_bitmap.o:00000000000001f0 T __wapl_bitmap_and
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000460 T __wapl_bitmap_clear
lib/.libs/libodp-linux.a:odp_bitmap.o:00000000000004f0 T __wapl_bitmap_iterator
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000250 T __wapl_bitmap_or
lib/.libs/libodp-linux.a:odp_bitmap.o:0000000000000360 T __wapl_bitmap_set
lib/.libs/libodp-linux.a:odp_classification.o:00000000000018d0 T cls_classify_packet
lib/.libs/libodp-linux.a:odp_classification.o:0000000000001890 T pktio_classifier_init
lib/.libs/libodp-linux.a:odp_errno.o:0000000000000000 B __odp_errno
lib/.libs/libodp-linux.a:odp_packet.o:0000000000000550 T packet_alloc_multi
lib/.libs/libodp-linux.a:odp_packet.o:00000000000062e0 T packet_parse_common
lib/.libs/libodp-linux.a:odp_packet.o:0000000000006300 T packet_parse_layer
lib/.libs/libodp-linux.a:odp_packet.o:0000000000000500 T packet_parse_reset
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000200 C pktio_entry_ptr
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000003280 T sched_cb_pktin_poll
lib/.libs/libodp-linux.a:odp_packet_io.o:0000000000000b70 T sched_cb_pktio_stop_finalize
lib/.libs/libodp-linux.a:ethtool.o:0000000000000000 T ethtool_stats_get_fd
lib/.libs/libodp-linux.a:io_ops.o:0000000000000000 D pktio_if_ops
lib/.libs/libodp-linux.a:ipc.o:0000000000000000 D ipc_pktio_ops
lib/.libs/libodp-linux.a:pktio_common.o:0000000000000160 T sock_stats_fd
lib/.libs/libodp-linux.a:pktio_common.o:0000000000000000 T sock_stats_reset_fd
lib/.libs/libodp-linux.a:loop.o:0000000000000000 D loopback_pktio_ops
lib/.libs/libodp-linux.a:socket.o:0000000000000fc0 T link_status_fd
lib/.libs/libodp-linux.a:socket.o:0000000000000930 T mac_addr_get_fd
lib/.libs/libodp-linux.a:socket.o:0000000000000a10 T mtu_get_fd
lib/.libs/libodp-linux.a:socket.o:0000000000000f10 T promisc_mode_get_fd
lib/.libs/libodp-linux.a:socket.o:0000000000000de0 T promisc_mode_set_fd
lib/.libs/libodp-linux.a:socket.o:0000000000001070 T rss_conf_get_fd
lib/.libs/libodp-linux.a:socket.o:0000000000001810 T rss_conf_get_supported_fd
lib/.libs/libodp-linux.a:socket.o:0000000000001de0 T rss_conf_print
lib/.libs/libodp-linux.a:socket.o:0000000000001480 T rss_conf_set_fd
lib/.libs/libodp-linux.a:socket.o:00000000000006c0 W sendmmsg
lib/.libs/libodp-linux.a:socket.o:0000000000000000 D sock_mmsg_pktio_ops
lib/.libs/libodp-linux.a:socket_mmap.o:0000000000000000 D sock_mmap_pktio_ops
lib/.libs/libodp-linux.a:sysfs.o:0000000000000110 T sysfs_stats
lib/.libs/libodp-linux.a:tap.o:0000000000000000 D tap_pktio_ops
lib/.libs/libodp-linux.a:ring.o:0000000000000b20 T _ring_count
lib/.libs/libodp-linux.a:ring.o:0000000000000030 T _ring_create
lib/.libs/libodp-linux.a:ring.o:0000000000000e40 T _ring_dequeue_burst
lib/.libs/libodp-linux.a:ring.o:00000000000001f0 T _ring_destroy
lib/.libs/libodp-linux.a:ring.o:0000000000000b50 T _ring_dump
lib/.libs/libodp-linux.a:ring.o:0000000000000b00 T _ring_empty
lib/.libs/libodp-linux.a:ring.o:0000000000000e00 T _ring_enqueue_burst
lib/.libs/libodp-linux.a:ring.o:0000000000000b30 T _ring_free_count
lib/.libs/libodp-linux.a:ring.o:0000000000000ae0 T _ring_full
lib/.libs/libodp-linux.a:ring.o:0000000000000d40 T _ring_list_dump
lib/.libs/libodp-linux.a:ring.o:0000000000000d80 T _ring_lookup
lib/.libs/libodp-linux.a:ring.o:0000000000000ac0 T _ring_mc_dequeue_bulk
lib/.libs/libodp-linux.a:ring.o:0000000000000e20 T _ring_mc_dequeue_burst
lib/.libs/libodp-linux.a:ring.o:00000000000006f0 T ___ring_mc_do_dequeue
lib/.libs/libodp-linux.a:ring.o:00000000000002a0 T ___ring_mp_do_enqueue
lib/.libs/libodp-linux.a:ring.o:0000000000000aa0 T _ring_mp_enqueue_bulk
lib/.libs/libodp-linux.a:ring.o:0000000000000de0 T _ring_mp_enqueue_burst
lib/.libs/libodp-linux.a:ring.o:0000000000000ad0 T _ring_sc_dequeue_bulk
lib/.libs/libodp-linux.a:ring.o:0000000000000e30 T _ring_sc_dequeue_burst
lib/.libs/libodp-linux.a:ring.o:00000000000008e0 T ___ring_sc_do_dequeue
lib/.libs/libodp-linux.a:ring.o:0000000000000280 T _ring_set_water_mark
lib/.libs/libodp-linux.a:ring.o:00000000000004e0 T ___ring_sp_do_enqueue
lib/.libs/libodp-linux.a:ring.o:0000000000000ab0 T _ring_sp_enqueue_bulk
lib/.libs/libodp-linux.a:ring.o:0000000000000df0 T _ring_sp_enqueue_burst
lib/.libs/libodp-linux.a:ring.o:0000000000000000 T _ring_tailq_init
lib/.libs/libodp-linux.a:odp_pool.o:0000000000000810 T buffer_alloc_multi
lib/.libs/libodp-linux.a:odp_pool.o:0000000000000aa0 T buffer_free_multi
lib/.libs/libodp-linux.a:odp_pool.o:0000000000000008 C pool_tbl
lib/.libs/libodp-linux.a:odp_queue.o:0000000000000080 D queue_default_api
lib/.libs/libodp-linux.a:odp_queue.o:0000000000000000 D queue_default_fn
lib/.libs/libodp-linux.a:odp_queue.o:0000000000001470 T sched_cb_queue_deq_multi
lib/.libs/libodp-linux.a:odp_queue.o:00000000000013d0 T sched_cb_queue_destroy_finalize
lib/.libs/libodp-linux.a:odp_queue.o:00000000000016a0 T sched_cb_queue_empty
lib/.libs/libodp-linux.a:odp_queue.o:0000000000001460 T sched_cb_queue_handle
lib/.libs/libodp-linux.a:odp_queue_if.o:0000000000000008 D queue_api
lib/.libs/libodp-linux.a:odp_queue_if.o:0000000000000000 D queue_fn
lib/.libs/libodp-linux.a:odp_schedule.o:0000000000000000 D schedule_default_api
lib/.libs/libodp-linux.a:odp_schedule.o:00000000000000a0 D schedule_default_fn
lib/.libs/libodp-linux.a:odp_schedule_if.o:0000000000000000 D sched_api
lib/.libs/libodp-linux.a:odp_schedule_if.o:0000000000000008 D sched_fn
lib/.libs/libodp-linux.a:odp_schedule_sp.o:0000000000000000 D schedule_sp_api
lib/.libs/libodp-linux.a:odp_schedule_sp.o:00000000000000a0 D schedule_sp_fn
lib/.libs/libodp-linux.a:odp_schedule_iquery.o:0000000000000000 D schedule_iquery_api
lib/.libs/libodp-linux.a:odp_schedule_iquery.o:00000000000000a0 D schedule_iquery_fn
lib/.libs/libodp-linux.a:cpu_flags.o:0000000000000080 T cpu_flags_print_all
lib/.libs/libodp-linux.a:cpu_flags.o:00000000000001f0 T cpu_has_global_time
lib/.libs/libodp-linux.a:odp_cpu_arch.o:0000000000000030 T cpu_global_time
lib/.libs/libodp-linux.a:odp_cpu_arch.o:0000000000000040 T cpu_global_time_freq
lib/.libs/libodp-linux.a:odp_sysinfo_parse.o:0000000000000000 T cpuinfo_parser
lib/.libs/libodp-linux.a:odp_sysinfo_parse.o:00000000000002e0 T sys_info_print_arch
lib/.libs/libodp-linux.a:pcap.o:0000000000000000 D pcap_pktio_ops

Comment 10 Bill Fischofer 2017-08-31 14:44:40 UTC
PR #108 resolves this (merged)

Comment 11 Dmitry Eremin-Solenikov 2017-08-31 14:46:30 UTC
Reopening again. Please verify binary before closing the issue next time.

Comment 12 Bill Fischofer 2017-09-25 19:28:27 UTC
MUSTFIX for Tiger Moth

Comment 13 Bill Fischofer 2017-11-09 15:51:45 UTC
Not critical for Tiger Moth. Can revisit this afterwards.

Comment 14 Bill Fischofer 2017-12-07 15:40:40 UTC
Ping to Dmitry. Do we still want to pursue this?

Comment 15 Bill Fischofer 2017-12-21 15:40:08 UTC
Will review as part of Tiger Moth RC2.

Comment 16 Bill Fischofer 2018-01-18 15:37:57 UTC
Ping to Dmitry. Do we still want to pursue this for Tiger Moth RC2?

Comment 17 Dmitry Eremin-Solenikov 2018-01-18 20:42:08 UTC
Did not have time yet. It requires shooting them one by one.

Comment 18 Bill Fischofer 2018-04-05 13:19:22 UTC
This is a "nice to have" that we'll look at more post-Tiger Moth.
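
Until every internal symbol is renamed into the odp/_odp namespace, offending symbols can also be demoted to local visibility in the built archive with objcopy, e.g. (illustrative command only; the approach discussed above was renaming in source):

$ objcopy --localize-symbol=raw_bitmap_clear lib/.libs/libodp-linux.a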

Modular framework: Simplify calling the APIs of an active module

With the current modular framework, the public APIs of the active module are called as shown below:

int odp_queue_term_global(void)
{
	odp_queue_module_t *mod =
		odp_subsystem_active_module(queue, mod);

	ODP_ASSERT(mod);
	ODP_ASSERT(mod->base.term_global);

	return mod->base.term_global();
}

This can be improved further in the following ways:

  1. Provide a macro/API to replace the body
  2. The macro 'odp_subsystem_active_module' accesses the 'odp_subsystem_t' structure, loading a lot of data that is not required in the data plane. It would be good to store the 'active' module pointer outside of the 'odp_subsystem_t' structure so that unwanted data is not loaded into the cache on the data plane.
    This can be extended further to group the active modules of all ODP components into an array, as sketched below.
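
A sketch of that extension (all names are hypothetical illustrations, not existing framework symbols):

/* Flat array of active-module pointers: the data plane touches one
 * pointer-sized slot per subsystem instead of the whole
 * odp_subsystem_t structure. */
enum {
	ODP_SUBSYS_BUFFER,
	ODP_SUBSYS_QUEUE,
	/* ... */
	ODP_SUBSYS_MAX
};

static void *active_module[ODP_SUBSYS_MAX];

#define odp_active(subsys, type) \
	((type *)active_module[ODP_SUBSYS_##subsys])

/* usage: odp_active(QUEUE, odp_queue_module_t)->base.term_global(); */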

Traffic manager validation test fails randomly on ThunderX2

While running ODP linux-generic on the ThunderX2 platform (Ubuntu 16.04 - 4.13.0-19.22-generic), the traffic manager validation test fails quite randomly (~20% of runs).

Collection of errors from failed runs:

  Test: traffic_mngr_test_shaper ...traffic_mngr.c:2531:test_shaper_bw():min=1 avg_rcv_gap=6125 max=13473 std_dev_gap=4982
traffic_mngr.c:2534:test_shaper_bw():  expected_rcv_gap=10000 acceptable rcv_gap range=7998..12502
traffic_mngr.c:2552:test_shaper_bw():agv_rcv_gap=6125 acceptable rcv_gap range=7998..12502
FAILED
    1. traffic_mngr.c:3854  - !odp_cunit_ret(test_shaper_bw("bw1", "node_1_1_1", 0, MBPS * 1))


  Test: traffic_mngr_test_shaper ...traffic_mngr.c:2531:test_shaper_bw():min=1 avg_rcv_gap=1941 max=3999 std_dev_gap=1834
traffic_mngr.c:2534:test_shaper_bw():  expected_rcv_gap=1000 acceptable rcv_gap range=798..1252
traffic_mngr.c:2552:test_shaper_bw():agv_rcv_gap=1941 acceptable rcv_gap range=798..1252
traffic_mngr.c:2559:test_shaper_bw():std_dev_gap=1834 >  expected_rcv_gap_us=1000
FAILED
    1. traffic_mngr.c:3862  - !odp_cunit_ret(test_shaper_bw("bw10", "node_1_1_1", 2, 10 * MBPS))


  Test: traffic_mngr_test_shaper ...traffic_mngr.c:2531:test_shaper_bw():min=3996 avg_rcv_gap=18023 max=51818 std_dev_gap=12944
traffic_mngr.c:2534:test_shaper_bw():  expected_rcv_gap=10000 acceptable rcv_gap range=7998..12502
traffic_mngr.c:2552:test_shaper_bw():agv_rcv_gap=18023 acceptable rcv_gap range=7998..12502
traffic_mngr.c:2559:test_shaper_bw():std_dev_gap=12944 >  expected_rcv_gap_us=10000
traffic_mngr.c:2502:test_shaper_bw():Sent 50 pkts but only 19 came back
traffic_mngr.c:2502:test_shaper_bw():Sent 50 pkts but only 20 came back
traffic_mngr.c:2502:test_shaper_bw():Sent 50 pkts but only 11 came back
FAILED
    1. traffic_mngr.c:3854  - !odp_cunit_ret(test_shaper_bw("bw1", "node_1_1_1", 0, MBPS * 1))
    2. traffic_mngr.c:3858  - !odp_cunit_ret(test_shaper_bw("bw4", "node_1_1_1", 1, 4 * MBPS))
    3. traffic_mngr.c:3862  - !odp_cunit_ret(test_shaper_bw("bw10", "node_1_1_1", 2, 10 * MBPS))
    4. traffic_mngr.c:3866  - !odp_cunit_ret(test_shaper_bw("bw40", "node_1_1_1", 3, 40 * MBPS))

Any ideas what could be causing this?

Timer API missing capabilities

The ODP timer API doesn't currently provide capabilities for the following parameters used in odp_timer_pool_param_t (a sketch of possible capability fields follows the list):

  • Maximum number of timer pools
  • Maximum number of timers in a timer pool
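
One way the capabilities could be exposed (field names are illustrative; check the current odp_timer_capability_t for what was eventually adopted):

/* Hypothetical additions to the timer capability structure */
typedef struct {
	/* ... existing capability fields ... */
	uint32_t max_pools;  /* maximum number of timer pools */
	uint32_t max_timers; /* maximum number of timers in a single pool */
} odp_timer_capability_t;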

./bootstrap throws warnings

honnag01@ubuntu:~/api-next/odp$ ./bootstrap

  • aclocal -I config -I m4
    ./test/linux-dpdk/m4/configure.m4:1: warning: file `test/linux-generic/m4/performance.m4' included several times
    ../../lib/m4sugar/m4sh.m4:639: AS_IF is expanded from...
    ./test/linux-dpdk/m4/configure.m4:1: the top level
    ./test/linux-dpdk/m4/configure.m4:1: warning: file `test/linux-generic/m4/performance.m4' included several times
    ../../lib/m4sugar/m4sh.m4:639: AS_IF is expanded from...
    ./test/linux-dpdk/m4/configure.m4:1: the top level
  • libtoolize --copy
  • autoheader
    ./test/linux-dpdk/m4/configure.m4:1: warning: file `test/linux-generic/m4/performance.m4' included several times
    ../../lib/m4sugar/m4sh.m4:639: AS_IF is expanded from...
    ./test/linux-dpdk/m4/configure.m4:1: the top level
  • automake --add-missing --copy --warnings=all
    ./test/linux-dpdk/m4/configure.m4:1: warning: file `test/linux-generic/m4/performance.m4' included several times
    ../../lib/m4sugar/m4sh.m4:639: AS_IF is expanded from...
    ./test/linux-dpdk/m4/configure.m4:1: the top level
  • autoconf --force
    ./test/linux-dpdk/m4/configure.m4:1: warning: file `test/linux-generic/m4/performance.m4' included several times
    ../../lib/m4sugar/m4sh.m4:639: AS_IF is expanded from...
    ./test/linux-dpdk/m4/configure.m4:1: the top level

TODO: remove temporary flags for pktio modules linkage

In #139 (comment)

For each pktio ops module, to make sure it links into the ODP application, a temporary flag variable is added to the module and set in its init routine.

In the future, with Makefile reorganization, these variables will be removed and the modules will be linked with the --whole-archive or --no-as-needed linker options, or loaded dynamically (see the example below).
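
For reference, the --whole-archive approach mentioned above forces the linker to keep every object in the archive, including pktio modules that nothing references directly (flags are illustrative):

$ gcc app.o -Wl,--whole-archive -lodp-linux -Wl,--no-whole-archive -o app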

Low VALE performance

I have used ODP a lot with VALE and have found that performance is suboptimal when compared to what you can obtain when using netmap directly without ODP.

How to reproduce this:

scapy
>>> wrpcap("1414.pcap", Ether()/IP()/UDP()/((1400-20-8)*"\0"))

Now you have a file named 1414.pcap.

Then clone https://github.com/jmtilli/pptk and compile with netmap following the instructions of README.md. You need to edit opts.mk according to those instructions (the file opts.mk does not exist unless you have typed "make", which will create it as an empty file).

Now, I started the packet generator program with:

sudo ./pptk/netmap/netmapreplay vale0:0 1414.pcap

And the packet sink program with:

sudo ./pptk/netmap/netmaprecv vale1:0

And then compared my l2fwd application:

sudo ./pptk/netmap/netmapfwd vale0:1 vale1:1

With odp_l2fwd_simple:

sudo odp_l2fwd_simple vale0:1 vale1:1 01:02:03:04:05:06 07:08:09:0a:0b:0c

On the machine on which I'm currently writing this bug report, I have ODP v1.18.0.0 and pptk 5a549f14bb1ee0923568ca5b3236bbf0e0829a26.

I also made the following change to pptk to make netmapfwd of pptk comparable with ODP:

diff --git a/netmap/netmapfwd.c b/netmap/netmapfwd.c
index a16ea8b..4f911d8 100644
--- a/netmap/netmapfwd.c
+++ b/netmap/netmapfwd.c
@@ -27,8 +27,10 @@ int main(int argc, char **argv)
   }
 
   memset(&nmr, 0, sizeof(nmr));
+#if 0
   nmr.nr_rx_slots = 256;
   nmr.nr_tx_slots = 64;
+#endif
   dlnmd = nm_open(argv[1], &nmr, 0, NULL);
   if (dlnmd == NULL)
   {
@@ -36,8 +38,10 @@ int main(int argc, char **argv)
     exit(1);
   }
   memset(&nmr, 0, sizeof(nmr));
+#if 0
   nmr.nr_rx_slots = 256;
   nmr.nr_tx_slots = 64;
+#endif
   ulnmd = nm_open(argv[2], &nmr, 0, NULL);
   if (ulnmd == NULL)
   {

Now, my expectation would be for ODP to have minimal overhead compared with netmap. However, with 1414.pcap on the test machine (Intel(R) Xeon(R) CPU E3-1230 v5 @ 3.40GHz), ODP achieves about 2.2 MPPS (25 Gbps) while my own netmapfwd achieves about 4-4.5 MPPS (45-50 Gbps). The netmapfwd figure was obtained with the patch that removes setting nr_rx_slots and nr_tx_slots to the most optimal values.

I also tried:

scapy
>>> wrpcap("ethermin.pcap", Ether()/IP()/UDP()/((60-20-8-14)*"\0"))

With this ethermin.pcap, netmapfwd has performance of about 10 MPPS with the patch to remove setting of nr_rx_slots and nr_tx_slots to the most optimal values. ODP's odp_l2fwd_simple has performance of just about 5.5 MPPS.

I also tried modifying netmapreplay.c to not set nr_tx_slots / nr_rx_slots. Now the performance of my application with large packets is 2.9 MPPS versus 1.3 MPPS for odp_l2fwd_simple. With small packets, my application reaches 7.1 MPPS while odp_l2fwd_simple reaches 2.8 MPPS.

Clearly, the overhead of ODP shouldn't be that large! And I'm using the latest version of ODP.

This isn't just limited to the simplest of possible applications. I have also made a TCP SYN proxy using netmap (https://github.com/jmtilli/nmsynproxy) that does some packet processing without just forwarding all packets in a dumb manner. Performance of ODP version is 1.4 MPPS and performance of raw netmap version is 2.4 MPPS (71% performance benefit). To test with netmap, edit conf.txt to have a row with "test_connections;" (without quotes), and run in different terminal windows:

sudo ./nmsynproxy/synproxy/nmsynproxy vale0:1 vale1:1
sudo ./nmsynproxy/synproxy/netmapsend vale0:0

Then compile with ODP by editing opts.mk to have:

WITH_NETMAP=yes
NETMAP_INCDIR=/wherever/is/netmap/sys
WITH_ODP=yes
ODP_DIR=/wherever/odp/is/installed

Also, LIBS_ODPDEP may need to be set in opts.mk to fix any possible linking issues. Then run in different terminal windows:

sudo ./nmsynproxy/synproxy/odpsynproxy vale0:1 vale1:1
sudo ./nmsynproxy/synproxy/netmapsend vale0:0

...and you observe the performance difference (although by editing netmapsend and nmsynproxy to not have the nr_tx_slots and nr_rx_slots code lines, performance of the netmap version drops to 1.55 MPPS, and performance of ODP version drops to 1.0 MPPS -- still 55% performance benefit of raw netmap).

API: crypto: return value of `odp_crypto_cipher_capability`/`odp_crypto_auth_capability`

Currently we enforce that the crypto capability retrieval functions always return the same value for the 'number of capability structures'. This lets one use the following code sequence easily:

int num_caps = odp_crypto_cipher_capability(ODP_CIPHER_ALG_AES_CBC, NULL, 0);
odp_crypto_cipher_capability_t capa[num_caps];
int ret = odp_crypto_cipher_capability(ODP_CIPHER_ALG_AES_CBC, capa, num_caps);
int i;

for (i = 0; i < num_caps; i++)
    do_something(&capa[i]);

For ODP-DPDK this might lead to the inclusion of duplicate or overlapping entries in the capabilities array.
Options to consider:

  • leave the API as is and document that the capabilities might include overlapping entries
  • change the requirement so that ret <= num_caps, forcing users to loop only up to ret in the second loop (a sketch of this follows below)
  • ???
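
A minimal sketch of what option 2 would require from users, with do_something() as the application-defined placeholder from the snippet above:

static void list_aes_cbc_capas(void)
{
	int num_caps = odp_crypto_cipher_capability(ODP_CIPHER_ALG_AES_CBC, NULL, 0);
	odp_crypto_cipher_capability_t capa[num_caps];
	int ret = odp_crypto_cipher_capability(ODP_CIPHER_ALG_AES_CBC, capa, num_caps);
	int i;

	/* Under option 2, ret may be smaller than num_caps, so the loop
	 * bound must be ret rather than num_caps. */
	for (i = 0; i < ret; i++)
		do_something(&capa[i]);
}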

TCP transmission failed to send packets

I built a simple three-point network that uses ODP & OFP (both at master) to forward packets. The central point runs OFP & ODP as a router; the other two points run iperf as client and server, without ODP & OFP.
UDP transmission is good. When using iperf through OFP & ODP to transmit TCP packets, the client can only send several packets after the connection is established and then nearly stops; the speed is very slow.
I used wireshark to capture the packets and ran some tests, and found:

  1. If a TCP packet is larger than the MSS (1460 bytes), it triggers a TCP retransmission. Each ACK from the server acknowledges only a 1448-byte increase, while the amount of data sent by the client grows much faster. After several packets the TCP flow goes out of order. In other words, the ACKs from the receiver limit the growth of the TCP stream and lead to the transmission failing.
  2. If I keep TCP packets no larger than the MSS, the receiving port receives the expected packets, but the packets captured on the transmitting port are still larger than the MSS. It seems very likely that ODP or OFP reassembles the packets (wireshark shows the IPv4 fragmentation flag is not set).

Does anyone have ideas about this problem, or has anyone run into a similar situation before?
Thanks.

Multiple producer single consumer queue params

Hello,
I am very new to ODP. We are currently looking to improve performance by porting an existing LTE user-plane application to OFP/ODP-DPDK. In our architecture we need multiple-producer single-consumer queues. In simple example tests we have found that op_mode MT_UNSAFE gives much higher performance than MT_SAFE. Can we use MT_UNSAFE even if multiple threads try to enqueue into the queue at the same time? Also, how can we use the nonblocking setting to our advantage? Are these queues similar to the ring library that DPDK provides? If not, is there any way to use the DPDK ring library through ODP APIs? (A sketch of one possible queue configuration follows below.)
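
As a minimal sketch (not a definitive answer), the ODP queue API lets the enqueue and dequeue sides be configured independently via the enq_mode and deq_mode fields of odp_queue_param_t, so an MPSC queue would keep MT-safe enqueue and mark only the dequeue side MT_UNSAFE:

#include <odp_api.h>

/* Sketch: MPSC plain queue. Enqueue stays multi-thread safe because
 * several producers enqueue concurrently; dequeue is MT_UNSAFE
 * because only one consumer thread ever calls odp_queue_deq(). */
static odp_queue_t create_mpsc_queue(void)
{
	odp_queue_param_t qparam;

	odp_queue_param_init(&qparam);
	qparam.type     = ODP_QUEUE_TYPE_PLAIN;
	qparam.enq_mode = ODP_QUEUE_OP_MT;        /* multiple producers */
	qparam.deq_mode = ODP_QUEUE_OP_MT_UNSAFE; /* single consumer */

	return odp_queue_create("mpsc_queue", &qparam);
}

Using MT_UNSAFE for the enqueue side as well would be valid only if the application itself guarantees that producers never enqueue concurrently.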

Traffic Manager - Queue management

Hello,

Context: I use the Traffic Manager to create and destroy queues dynamically (on demand); each queue is different and specific to one flow (IP src / IP dst / ...). The rules are defined by an SDN controller.

Queue creation with TM (odp_tm_queue_create) has been designed for static or add-only use:
each created queue is recorded in the TM queue pool (tm_system->queue_num_tbl) at a position defined by an ever-incrementing number (tm_system->next_queue_num++).
In my case, depending on network activity, queues are created and freed dynamically; but on release, the queue's position in the TM queue pool is not reusable for a future creation.
In the end, after multiple create/release cycles, I have consumed all positions of the TM queue pool.

The global queue pool (tm_glb->queue_obj.obj) is managed dynamically and does not have this problem: the first queue found with status FREE is used.
Why not, in this case, reuse freed positions the same way in the TM queue pool when creating a queue? (It's not the best solution, but it would fix this problem; a sketch follows after the code extract.)

extract of the code.
odp_traffic_mgnr_internal.h

tm_queue_obj_t     *queue_num_tbl[ODP_TM_MAX_TM_QUEUES];

odp_traffic_mgnr.c

odp_tm_queue_t odp_tm_queue_create(odp_tm_t odp_tm, const odp_tm_queue_params_t *params) {
	...
	for (i = 0; i < ODP_TM_MAX_TM_QUEUES; i++) {
		_odp_int_queue_pool_t  int_queue_pool;

		queue_obj = tm_qobj_from_index(i);

		if (queue_obj->status != TM_STATUS_FREE)
			continue;
			
		...
		
		odp_tm_queue = MAKE_ODP_TM_QUEUE(queue_obj);
		memset(queue_obj, 0, sizeof(tm_queue_obj_t));
		
		...
		
		queue_obj->queue_num = tm_system->next_queue_num++;
		
		...
		
		tm_system->queue_num_tbl[queue_obj->queue_num - 1] = queue_obj;
		
		...
		
		queue_obj->status = TM_STATUS_RESERVED;
		
		...
	}
}
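
As a hypothetical sketch of the suggested fix (reusing the field names from the extract above, and assuming queue destruction clears the corresponding queue_num_tbl entry to NULL), the allocation inside odp_tm_queue_create() could scan for a free position instead of consuming next_queue_num++:

/* Sketch only: find the first free position in the TM queue pool.
 * queue_num_tbl is indexed by queue_num - 1, as in the extract. */
uint32_t queue_num = 0;
uint32_t j;

for (j = 0; j < ODP_TM_MAX_TM_QUEUES; j++) {
	if (tm_system->queue_num_tbl[j] == NULL) {
		queue_num = j + 1;
		break;
	}
}

if (queue_num == 0)
	return ODP_TM_INVALID; /* all TM queue pool positions in use */

queue_obj->queue_num = queue_num;
tm_system->queue_num_tbl[queue_num - 1] = queue_obj;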

best regards

Separate IP ID allocation for transport and tunnel mode SAs may cause duplicate IDs

Janne Peltonen 2018-10-03 10:20:21 UTC
Separate IP ID allocation for transport and tunnel mode SAs may cause duplicate IDs.

The IPsec implementation allocates IPv4 IDs for tunnel mode packets but copies the ID from the plain text packet in transport mode.

This can violate the IP ID uniqueness requirement when there are both transport mode and tunnel mode SAs between the same endpoints.

The ODP API does not explicitly say how IPv4 IDs are generated in transport mode. If the unstated intent of the API is to have ODP implementation generate the IP ID in all cases, then this problem should be fixed as a bug in the current implementation and maybe also the API text should be clarified. Alternatively, this can be seen as a change request to the API and then corresponding implementation change (i.e. not a bug).

I am filing this as a bug now based on my interpretation of the discussion in the architecture meeting this Monday.

Comment 1 Bill Fischofer 2018-10-03 20:22:08 UTC
Assigning to Dmitry for IPsec review.

Comment 2 Dmitry Eremin-Solenikov 2018-12-06 13:29:10 UTC
Well, please correct me if I'm wrong, but the IPsec RFCs do not specify that an IPv4 ID should be constructed in transport mode. So, if I understand correctly, it should not be changed when transforming the packet.

Comment 3 Janne Peltonen 2018-12-10 13:18:01 UTC
RFC 791 and RFC 6864 specify uniqueness criteria for the IP ID field. Those criteria have to be met also with IPsec even if IPsec RFCs do not say so explicitly.

Now an IP host/router implementation that is using ODP and ODP IPsec may end up sending two AH or ESP packets (one transport mode packet, one tunnel mode packet) with the same source and destination and with the same IP ID value very close to each other. This is wrong and can prevent successful reassembly of those packets if they get fragmented.

To put it another way, an IP endpoint cannot generate the IP ID value independently for different packets that share the same (source, destination, protocol) tuple, but that is what now happens with ODP.

Scheduler classifier and LAGs in ODP

Hello,

I have following unrelated queries:

  • Does ODP support Link Aggregation Groups? If yes, how?

  • When using the Scheduler/Classifier API over DPDK, how do we know whether flow matching is done in hardware or in software? Is this a compile-time option, or can I check it using a capability query?

Regards,
Neetika

spsc queues fail on ppc64le

Dmitry Eremin-Solenikov 2018-09-04 08:15:48 UTC
See attached build log.

Comment 1 Dmitry Eremin-Solenikov 2018-09-04 08:18:52 UTC

  Test: queue_test_burst_spsc ...FAILED
    1. queue.c:308  - ev != ODP_EVENT_INVALID
    2. queue.c:308  - ev != ODP_EVENT_INVALID
    3. queue.c:308  - ev != ODP_EVENT_INVALID
    4. queue.c:308  - ev != ODP_EVENT_INVALID
    5. queue.c:308  - ev != ODP_EVENT_INVALID
    6. queue.c:308  - ev != ODP_EVENT_INVALID
    7. queue.c:308  - ev != ODP_EVENT_INVALID
    8. queue.c:308  - ev != ODP_EVENT_INVALID
    9. queue.c:308  - ev != ODP_EVENT_INVALID

...
  Test: queue_test_burst_lf_spsc ...FAILED
    1. queue.c:308  - ev != ODP_EVENT_INVALID
    2. queue.c:308  - ev != ODP_EVENT_INVALID
    3. queue.c:308  - ev != ODP_EVENT_INVALID
    4. queue.c:308  - ev != ODP_EVENT_INVALID
    5. queue.c:308  - ev != ODP_EVENT_INVALID
    6. queue.c:308  - ev != ODP_EVENT_INVALID
    7. queue.c:308  - ev != ODP_EVENT_INVALID
    8. queue.c:308  - ev != ODP_EVENT_INVALID
.....
  Test: queue_test_pair_spsc ...Seq error: expected 1750701552, recv 1
Seq error: expected 1750701552, recv 2
Seq error: expected 1750701552, recv 3
Seq error: expected 1750701552, recv 4
Seq error: expected 1750701552, recv 5
Seq error: expected 1750701552, recv 6
Seq error: expected 1750701552, recv 7
Seq error: expected 4057936256, recv 1
Seq error: expected 4057936256, recv 2
Seq error: expected 4057936256, recv 3
Seq error: expected 4057936256, recv 4
Seq error: expected 4057936256, recv 5
Seq error: expected 4057936256, recv 6
Seq error: expected 4057936256, recv 7
Seq error: expected 1750701552, recv 0
Seq error: expected 4057936256, recv 0
Seq error: expected 1750701552, recv 1
Seq error: expected 4057936256, recv 1
Seq error: expected 1750701552, recv 2
Seq error: expected 4057936256, recv 2
Seq error: expected 1750701552, recv 3
Seq error: expected 4057936256, recv 3
Seq error: expected 1750701552, recv 4
Seq error: expected 4057936256, recv 4
Seq error: expected 1750701552, recv 5
Seq error: expected 4057936256, recv 5
Seq error: expected 1750701552, recv 6
Seq error: expected 4057936256, recv 6
Seq error: expected 1750701552, recv 7
Seq error: expected 4057936256, recv 7
Seq error: expected 1750701552, recv 0
Seq error: expected 4057936256, recv 0
Seq error: expected 1750701552, recv 1
Seq error: expected 4057936256, recv 1
Seq error: expected 1750701552, recv 2
Seq error: expected 4057936256, recv 2
Seq error: expected 1750701552, recv 3
Seq error: expected 4057936256, recv 3
Seq error: expected 1750701552, recv 4
Seq error: expected 4057936256, recv 4
Seq error: expected 1750701552, recv 5
Seq error: expected 4057936256, recv 5
Seq error: expected 1750701552, recv 6
Seq error: expected 4057936256, recv 6
Seq error: expected 1750701552, recv 7
Seq error: expected 4057936256, recv 7
Seq error: expected 1750701552, recv 0
Seq error: expected 4057936256, recv 0
Seq error: expected 1750701552, recv 1
Seq error: expected 4057936256, recv 1
Seq error: expected 1750701552, recv 2
Seq error: expected 4057936256, recv 2
Seq error: expected 1750701552, recv 3
Seq error: expected 4057936256, recv 3
Seq error: expected 1750701552, recv 4
Seq error: expected 4057936256, recv 4
Seq error: expected 1750701552, recv 5
Seq error: expected 4057936256, recv 5
Seq error: expected 1750701552, recv 6
Seq error: expected 4057936256, recv 6
Seq error: expected 1750701552, recv 7
Seq error: expected 4057936256, recv 7
Seq error: expected 1750701552, recv 0
Seq error: expected 4057936256, recv 0
Seq error: expected 1750701552, recv 1
Seq error: expected 4057936256, recv 1
Seq error: expected 1750701552, recv 2
Seq error: expected 4057936256, recv 2
Seq error: expected 1750701552, recv 3
Seq error: expected 4057936256, recv 3
Seq error: expected 1750701552, recv 4
Seq error: expected 4057936256, recv 4
Seq error: expected 1750701552, recv 5
Seq error: expected 4057936256, recv 5
Seq error: expected 1750701552, recv 6
Seq error: expected 4057936256, recv 6
Seq error: expected 1750701552, recv 7
Seq error: expected 4057936256, recv 7
Seq error: expected 1750701552, recv 0
Seq error: expected 4057936256, recv 0
Seq error: expected 1750701552, recv 1
Seq error: expected 4057936256, recv 1
Seq error: expected 1750701552, recv 2
Seq error: expected 4057936256, recv 2
Seq error: expected 1750701552, recv 3
Seq error: expected 4057936256, recv 3
Seq error: expected 1750701552, recv 4
Seq error: expected 4057936256, recv 4
Seq error: expected 1750701552, recv 5
Seq error: expected 4057936256, recv 5
Seq error: expected 1750701552, recv 6
Seq error: expected 4057936256, recv 6
Seq error: expected 1750701552, recv 7
Seq error: expected 4057936256, recv 7
Seq error: expected 1750701552, recv 0
Seq error: expected 4057936256, recv 0
Seq error: expected 1750701552, recv 1
Seq error: expected 4057936256, recv 1
Seq error: expected 1750701552, recv 2
Seq error: expected 4057936256, recv 2
Seq error: expected 1750701552, recv 3
Seq error: expected 4057936256, recv 3
Seq error: expected 1750701552, recv 4
Seq error: expected 4057936256, recv 4
Seq error: expected 1750701552, recv 5
Seq error: expected 4057936256, recv 5
Seq error: expected 1750701552, recv 6
Seq error: expected 4057936256, recv 6
Seq error: expected 1750701552, recv 7
Seq error: expected 4057936256, recv 7
Seq error: expected 1750701552, recv 0
Seq error: expected 4057936256, recv 0
Seq error: expected 1750701552, recv 1
Seq error: expected 4057936256, recv 1
Seq error: expected 1750701552, recv 2
Seq error: expected 4057936256, recv 2
Seq error: expected 1750701552, recv 3
Seq error: expected 4057936256, recv 3
Seq error: expected 1750701552, recv 4
Seq error: expected 4057936256, recv 4
Seq error: expected 1750701552, recv 5
Seq error: expected 4057936256, recv 5
Seq error: expected 1750701552, recv 6
Seq error: expected 4057936256, recv 6
Seq error: expected 4057936256, recv 7
Seq error: expected 4057936256, recv 0
Seq error: expected 1750701552, recv 7
Seq error: expected 4057936256, recv 1
Seq error: expected 1750701552, recv 0
Seq error: expected 4057936256, recv 2
Seq error: expected 1750701552, recv 1
Seq error: expected 4057936256, recv 3
Seq error: expected 1750701552, recv 2
Seq error: expected 4057936256, recv 4
Seq error: expected 4057936256, recv 5
Seq error: expected 1750701552, recv 3
Seq error: expected 4057936256, recv 6
Seq error: expected 1750701552, recv 4
.....

Maximum number of scheduling groups is restricting and unintuitively retrieved

Bogdan Pricope 2018-08-01 07:11:04 UTC
Maximum number of scheduling groups is very small:

  • for the basic scheduler it is 28 (NUM_SCHED_GRPS - SCHED_GROUP_NAMED)
  • for the SP scheduler it is 9
  • for the iquery scheduler it is 253 (ok)
  • etc.

Ideally, barring HW limitations, ODP should be able to create at least one scheduler group per core.

Also, retrieving the number of scheduler groups with the queue capability API (odp_queue_capability()) is unintuitive, considering that ODP has a number of scheduler-related APIs (odp_schedule_group_create(), odp_schedule_group_destroy(), odp_schedule_group_join(), odp_schedule_group_info(), etc.). A sketch of the current lookup follows below.
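
For reference, a minimal sketch of how the limit is discovered today, assuming the max_sched_groups field of odp_queue_capability_t in this ODP version:

#include <odp_api.h>
#include <stdio.h>

/* Sketch: the scheduler group limit currently hangs off the queue
 * capability, the placement this report argues is unintuitive. */
static void print_sched_group_limit(void)
{
	odp_queue_capability_t capa;

	if (odp_queue_capability(&capa) == 0)
		printf("max scheduling groups: %u\n", capa.max_sched_groups);
}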

tap_pktio_recv why not just use the packet's buffer space to recv packets

A question: I can't understand why we don't just use the packet's buffer space to receive packets from the tap device, so that we can save one packet copy. For example:
retval = read(tap->fd, (uint8_t *)_odp_packet_data(pkt), BUF_SIZE);

Original code:

for (i = 0; i < num; i++) {
	do {
		retval = read(tap->fd, buf, BUF_SIZE);
	} while (retval < 0 && errno == EINTR);

	if (ts != NULL)
		ts_val = odp_time_global();

	if (retval < 0) {
		__odp_errno = errno;
		break;
	}

	pkts[i] = pack_odp_pkt(pktio_entry, buf, retval, ts);
	if (pkts[i] == ODP_PACKET_INVALID)
		break;
}
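
For comparison, a hypothetical sketch of the zero-copy variant suggested above (pool and mtu stand for the pktio's packet pool and maximum frame length, which are assumptions here):

odp_packet_t pkt = odp_packet_alloc(pool, mtu);
ssize_t retval;

if (pkt == ODP_PACKET_INVALID)
	return -1;

do {
	/* Read straight into the packet data area: no intermediate
	 * buf, no copy in pack_odp_pkt(). */
	retval = read(tap->fd, odp_packet_data(pkt), mtu);
} while (retval < 0 && errno == EINTR);

if (retval < 0) {
	odp_packet_free(pkt);
	return -1;
}

/* Trim the unused tail down to the actual frame length. */
odp_packet_pull_tail(pkt, mtu - retval);

A caveat is that this only works when the packet's first segment covers the whole MTU; with segmented pools the read() would overrun the first segment, which may be why the implementation copies via a contiguous buffer.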

Error when making odp

Hello. There are some errors when I build the ODP code. All the required packages have been installed. The build environment is CentOS 7 with kernel 3.10.0; gcc, g++ and libgcc are version 4.8.5; ODP is the latest version, v1.18.0.1.

make[2]: Entering directory `/home/admin/Downloads/odp-master/test/performance'
  CCLD     odp_bench_packet
/home/admin/Downloads/odp-master/lib/.libs/libodp-linux.a(odp_libconfig.o): In function `lookup_int':
/home/admin/Downloads/odp-master/platform/linux-generic/odp_libconfig.c:110: undefined reference to `config_lookup_int'
/home/admin/Downloads/odp-master/lib/.libs/libodp-linux.a(odp_libconfig.o): In function `lookup_int':
odp_libconfig.c:(.text+0x77): undefined reference to `config_lookup_int'
/home/admin/Downloads/odp-master/lib/.libs/libodp-linux.a(odp_libconfig.o): In function `_odp_libconfig_init_global':
odp_libconfig.c:(.text+0xaa): undefined reference to `config_init'
odp_libconfig.c:(.text+0xb4): undefined reference to `config_init'
odp_libconfig.c:(.text+0xc3): undefined reference to `config_read_string'
odp_libconfig.c:(.text+0xeb): undefined reference to `config_read_file'
odp_libconfig.c:(.text+0x107): undefined reference to `config_lookup_string'
odp_libconfig.c:(.text+0x123): undefined reference to `config_lookup_string'
odp_libconfig.c:(.text+0x13b): undefined reference to `config_lookup_string'
odp_libconfig.c:(.text+0x157): undefined reference to `config_lookup_string'
odp_libconfig.c:(.text+0x1b8): undefined reference to `config_destroy'
odp_libconfig.c:(.text+0x1c2): undefined reference to `config_destroy'
/home/admin/Downloads/odp-master/lib/.libs/libodp-linux.a(odp_libconfig.o): In function `_odp_libconfig_term_global':
odp_libconfig.c:(.text+0x2ba): undefined reference to `config_destroy'

I think it may be due to libconfig.h. But I have set the correct path to the header file in the libconfig.pc file for pkg-config, and those undefined functions are already declared in the libconfig.h header, for example:

extern LIBCONFIG_API void config_init(config_t *config);
extern LIBCONFIG_API void config_destroy(config_t *config);

Does anyone have an idea how to solve this problem? Thank you.
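
For comparison, a typical libconfig.pc provides the linker flag as well as the include path (the paths below are illustrative for a default install); undefined references to config_* at link time usually mean -lconfig is missing from the link command rather than anything being wrong with the header:

prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: libconfig
Description: C Configuration File Library
Version: 1.5
Libs: -L${libdir} -lconfig
Cflags: -I${includedir}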

max_size requires clarification for ODP_NONBLOCKING_LF and ODP_NONBLOCKING_WF

I've noticed that the behavior of max_size is undefined for the non-blocking queue types. The unit tests expect it to always be bounded, which differs from the BLOCKING queue max_size logic.
See:

uint32_t max_size;

The question is whether max_size for non-blocking queue types can be unbounded (max_size == 0) or not. If yes, then the unit tests have to be aligned, as they currently fail (the burst size will be 0); see the sketch after the quoted test line below.

if (capa.plain.lockfree.max_size < max_burst)
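
A minimal sketch of that failure mode, using the capability field quoted above (the max_burst value is illustrative):

static void clamp_burst(void)
{
	uint32_t max_burst = 32; /* illustrative unit-test burst size */
	odp_queue_capability_t capa;

	odp_queue_capability(&capa);

	/* If max_size == 0 meant "unbounded", this clamp would zero the
	 * burst size and the test loop would run zero iterations. */
	if (capa.plain.lockfree.max_size < max_burst)
		max_burst = capa.plain.lockfree.max_size;
}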
