
dpdk-ans's Introduction

TCP/IP stack for dpdk


ANS (Accelerated Network Stack) is a DPDK-native TCP/IP stack that also borrows from the FreeBSD implementation. ANS provides a userspace TCP/IP stack for use with the Intel DPDK.


  • ans: the accelerated network stack process.

  • librte_ans: the TCP/IP stack as a static library. ANS uses DPDK's mbuf, ring, memzone, mempool, timer, and spinlock, so mbufs move between DPDK and ANS with zero copy.

  • librte_anssock: the ANS socket library for applications, with zero copy between ANS and the application.

  • librte_anscli: the ANS CLI library for route/IP/neighbor/link configuration.

  • cli: command-line tool for configuring the ANS TCP/IP stack.

  • example: ANS application examples.

  • test: example applications for testing the ANS TCP/IP stack.

Support environment

  • EAL is based on dpdk-18.11;
  • Linux kernel: 4.4.0-45-generic (Ubuntu 16.04.1 LTS);
  • gcc: 5.4.0 (20160609).

Supported features:

  • ANS initialization;
  • Running ANS in a container;
  • Ethernet layer, with zero copy between the NIC and the ANS TCP/IP stack;
  • ARP, with ARP timeout;
  • IP layer, with IP fragmentation and reassembly;
  • High-performance routing;
  • ICMP;
  • ACL;
  • Bypassing traffic to the Linux kernel;
  • Syncing IPs/routes from the Linux kernel;
  • Dynamic routing (OSPF/BGP, ...);
  • DHCP client;
  • Command-line interface:
    • Adding, deleting, showing IP addresses;
    • Adding, deleting, showing static routes;
    • Showing the neighbor table;
    • Showing interfaces and statistics;
    • Showing IP statistics;
    • Adding, deleting, showing ACLs;
    • Adding, deleting, showing bypass rules;
    • Showing the port/queue/lcore mapping;
    • Adding, deleting, showing flow filter rules;
  • UDP protocol;
  • Socket layer;
  • Socket API compatible with BSD; applications can choose the ANS socket or the Linux socket via a switch.
    • socket/bind/connect/listen/close/send/recv/epoll/writev/readv/shutdown...;
  • OpenSSL support;
  • TCP protocol:
    • Reliable transmission;
    • Dupack-based and timeout-based retransmission;
    • Flow control;
    • Congestion control: newreno/cubic/vegas...;
    • Maximum segment size;
    • Selective acknowledgments;
    • Window scaling;
    • TCP timestamps;
    • TCP ECN;
    • Keep-alive;
    • SO_REUSEPORT: multiple applications can listen on the same port;
    • Multicore TCP stack, one TCP stack per lcore;
    • TSO.
  • Vrouter:
    • vhost;
    • virtio-user;
    • KNI;
    • TAP;
  • Hardware:
    • x86: broadwell, haswell, ivybridge, knl, sandybridge, westmere, and so on;
    • arm: arm64 SoCs and edge computers.

ANS User Guide


# git clone https://github.com/ansyun/dpdk-ans.git
# cd dpdk-ans
# export RTE_ANS=/home/mytest/dpdk-ans
# ./install_deps.sh
# cd ans
# make


ANS Architecture


TCP Deployment


         |-------|       |-------|       |-------|
APP      |anssock|       |anssock|       |anssock|
         |-------|       |-------|       |-------|
             |               |               |			
            fd              fd              fd
--------------------------------------------------
ANS          |               |               |
         |-------|       |-------|       |-------|
         | TCP   |       |  TCP  |       | TCP   |
         |---------------------------------------|       
         |               IP/ARP/ICMP             |
         |---------------------------------------|       
         |LCORE0 |       |LCORE1 |       |LCORE2 |
         |-------|       |-------|       |-------|
             |               |               |
         |---------------------------------------| 
         |                  NIC + RSS            | 
         |---------------------------------------| 
  • The NIC distributes packets to lcores based on RSS, so packets of the same TCP flow are handled on the same lcore.
  • Each lcore runs its own TCP stack, lock-free.
  • IP/ARP/ICMP are shared between lcores.
  • The APP process runs as a TCP server.
    • If an APP process creates only one listening socket, that socket listens on a single lcore and accepts TCP connections only from that lcore, so the number of APP processes should be at least the number of lcores. The listening sockets of the APP processes are spread evenly across the lcores. For example: if ans runs on two lcores (-c 0x3), run two nginx instances (master only); one listens on lcore0, the other on lcore1.
  • APP processes can bind the same port if SO_REUSEPORT is enabled; TCP connections are then accepted round-robin across processes.
  • If the NIC does not support multiple queues or RSS, enhance ans_main.c: reserve one lcore to receive and send packets on the NIC, and distribute packets to the ANS TCP stack lcores via software RSS.

Performance Testing


ANS Performance Test Report

  • TCP server performance testing
    |------------------------------| 
    |      TCP Server CPS          |
    |------------------------------| 
    |       ANS with epoll         | 
    |         (one core)           |
    |------------------------------|
    |     100k connection/s         | 
    |------------------------------| 
  • L3 forwarding with NIC performance testing

    ENV: CPU- Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz, NIC- Intel 82599ES 10-Gigabit, Test tool:pktgen-DPDK

    |--------------------------------------| 
    |      L3 forwarding performance       |
    |             (one lcore)              |
    |--------------------------------------| 
    | Packet size(byte)| Throughput(Mpps)  | 
    |--------------------------------------|
    |     64           |       11.78       | 
    |--------------------------------------| 
    |     128          |      Line Rate    | 
    |--------------------------------------| 
 
  • L3 forwarding with vhost/virtio performance testing

    ENV: Intel(R) Xeon(R) CPU E5-2618L v4 @ 2.20GHz, NIC- vhost/virtio, Test tool:pktgen-DPDK

    |--------------------------------------| 
    |      L3 forwarding performance       |
    |             (one lcore)              |
    |--------------------------------------| 
    | Packet size(byte)| Throughput(Mpps)  | 
    |--------------------------------------|
    |     64           |       6.33        | 
    |--------------------------------------| 
    |     128          |       5.94        | 
    |--------------------------------------| 
    |     256          |     Line Rate     | 
    |--------------------------------------| 
  • dpdk-redis performance testing
    ENV: CPU- Intel(R) Xeon(R) CPU E5-2609 v4 @ 1.70GHz, NIC- Intel 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01). ANS runs on one lcore.

root@ubuntu:~/src/dpdk-redis# ./src/redis-benchmark -h 10.0.0.2 -p 6379 -n 100000 -c 50 -q
PING_INLINE: 138888.89 requests per second
PING_BULK: 141242.94 requests per second
SET: 140449.44 requests per second
GET: 141043.72 requests per second
INCR: 141442.72 requests per second
LPUSH: 141043.72 requests per second
LPOP: 140449.44 requests per second
SADD: 141643.06 requests per second
SPOP: 141843.97 requests per second
LPUSH (needed to benchmark LRANGE): 141442.72 requests per second
LRANGE_100 (first 100 elements): 48192.77 requests per second
LRANGE_300 (first 300 elements): 14330.75 requests per second
LRANGE_500 (first 450 elements): 10405.83 requests per second
LRANGE_600 (first 600 elements): 7964.95 requests per second
MSET (10 keys): 107758.62 requests per second

  • dpdk-nginx CPS performance
CPU: Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
NIC: 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
ANS runs on one lcore.
4 dpdk-nginx instances run on ANS.

# ./wrk --timeout=1 --latency -H "Connection: close" -t20 -c100 -d30s http://10.0.0.2
Running 30s test @ http://10.0.0.2
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   480.80us   73.12us   4.66ms   88.86%
    Req/Sec     5.32k   125.94     6.98k    84.30%
  Latency Distribution
     50%  478.00us
     75%  505.00us
     90%  535.00us
     99%  648.00us
  3186335 requests in 30.10s, 2.51GB read
Requests/sec: 105860.26
Transfer/sec:     85.31MB

ANS runs on two lcores.
8 dpdk-nginx instances run on ANS.
# ./wrk --timeout=1 --latency -H "Connection: close" -t20 -c100 -d30s http://10.0.0.2
Running 30s test @ http://10.0.0.2
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   251.70us   89.45us   4.39ms   68.59%
    Req/Sec    10.28k   340.17    12.40k    67.67%
  Latency Distribution
     50%  246.00us
     75%  310.00us
     90%  363.00us
     99%  480.00us
  6155775 requests in 30.10s, 4.84GB read
Requests/sec: 204512.88
Transfer/sec:    164.81MB

  • dpdk-nginx QPS performance
CPU: Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz
NIC: 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
ANS runs on one lcore.
8 dpdk-nginx instances run on ANS.

# ./wrk --timeout=1 --latency -t20 -c100 -d30s http://10.0.0.2
Running 30s test @ http://10.0.0.2
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   257.91us  407.36us  24.37ms   98.77%
    Req/Sec    20.85k     1.84k   26.88k    70.23%
  Latency Distribution
     50%  214.00us
     75%  289.00us
     90%  338.00us
     99%  825.00us
  12488349 requests in 30.10s, 9.89GB read
Requests/sec: 414900.42
Transfer/sec:    336.31MB

ANS runs on two lcores.
10 dpdk-nginx instances run on ANS.
# ./wrk --timeout=1 --latency -t20 -c100 -d30s http://10.0.0.2
Running 30s test @ http://10.0.0.2
  20 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   184.60us  165.39us  12.97ms   98.92%
    Req/Sec    26.69k     0.93k   29.92k    79.17%
  Latency Distribution
     50%  186.00us
     75%  200.00us
     90%  216.00us
     99%  370.00us
  15985217 requests in 30.10s, 12.65GB read
Requests/sec: 531077.97
Transfer/sec:    430.48MB

Examples



You can get more information and instructions from the wiki page.

Notes


  • Use the same DPDK version that the ANS libraries were built with.
  • Use the same gcc version to compile your application.
  • An ANS socket application runs as a secondary DPDK process. If you see the log below, run the command that follows to disable ASLR:
EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.
EAL: This may cause issues with mapping memory into secondary processes
$ sudo sysctl -w kernel.randomize_va_space=0
  • Modify the NIC configuration in ans_main.c to match your NIC type.

  • ANS does not support the loopback interface, so a socket client and server cannot run in the same ANS TCP/IP stack.

  • To improve ANS performance, isolate ANS's lcores from the kernel with isolcpus, and keep interrupts off ANS's lcores by updating /proc/irq/default_smp_affinity.
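A sketch of that isolation, assuming ANS runs on lcores 1-2 (adjust the CPU list and IRQ mask to your own core layout):

```shell
# Illustrative only; requires root and a reboot for the isolcpus change.

# 1) Reserve lcores 1 and 2 from the kernel scheduler: add isolcpus to
#    the kernel command line (e.g. in /etc/default/grub), then update
#    grub and reboot:
#    GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=1,2"

# 2) Keep device interrupts off the ANS lcores: allow only CPU0 in the
#    default IRQ affinity bitmask (1 = CPU0).
echo 1 > /proc/irq/default_smp_affinity
```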

  • ANS runs as the DPDK primary process; before starting ANS, stop all secondary processes (nginx/redis/http_server).

  • Do not run ANS on lcore0; it will affect ANS performance.

  • Include the DPDK libraries as shown below, because the mempool library uses __attribute__((constructor, used)) in recent versions; otherwise your application will core dump.

  $(RTE_ANS)/librte_anssock/librte_anssock.a \
  -L$(RTE_SDK)/$(RTE_TARGET)/lib \
  -Wl,--whole-archive -Wl,-lrte_mbuf -Wl,-lrte_mempool_ring -Wl,-lrte_mempool -Wl,-lrte_ring -Wl,-lrte_eal -Wl,--no-whole-archive -Wl,-export-dynamic -lnuma \

Support


BSD license; you may use ANS freely.

For free support, please use the ANS team mailing list at [email protected], QQ group 86883521 (full), 519855957 (full), or 719128726, https://dpdk-ans.slack.com, or Skype (bluenet13).

dpdk-ans's People

Contributors

bluenet13, bpsz, eaglerayp, ibreaker, jeltef, luqiuwen, rfsvasconcelos


dpdk-ans's Issues

Failed to init socket

Hi,

I am trying to compile and run the sample http server, but it is unable to initialize the socket properly.
Could you please help? The source code for the socket library is not available.

ubuntu@dpdk$ sudo ./http_server
affinity to 0 core by default
EAL: Detected 2 lcore(s)
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
USER8: LCORE[-1] anssock any lcore id 0xffffffff
USER8: LCORE[-1] Can't find ans memzone
init sock failed
USER8: LCORE[-1] socket: anssock isn't init, (nil)
socket error

PF_PACKET domain socket support

Could you implement PF_PACKET domain socket support in dpdk-ans?
Thanks very much!

Some users may want to use PF_PACKET domain sockets to receive or transmit raw Ethernet frames.
Below is sample code.

    int ret;
    struct sockaddr_ll sock_addr = {
        .sll_family = AF_PACKET,
        .sll_protocol = 0,
        .sll_ifindex = 0
    };

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    ret = bind(fd, (struct sockaddr *)&sock_addr, sizeof(struct sockaddr_ll));

Blocking sockets

Some sockets are more useful if they start out blocking, so that initialization can happen synchronously. Is it possible to use blocking sockets with this library?

Latency numbers for short messages?

Hi,

Low latency is important in my use case, so I am curious about the round-trip time of short messages (say, 100 bytes) over an established connection between two hosts in the same local cluster using ANS. Has anyone tried that before? In my testbed with a 10Gb network, it's ~25 us using kernel TCP. Thanks.

routes problem

I think it is somewhere in librte_ans.a, but I don't have the sources.

I can run 2 nginx instances on one ans on different ports. They are reachable from outside, but they can't reach each other via proxy_pass because of a route issue.
Initially I have these routes:

ANS IP routing table
Destination      Gateway          Netmask          Flags   Iface           
10.0.0.0         *                255.255.255.0    U C     eth0                             
10.0.0.101       *                255.255.255.255  U H L   eth0 

10.0.0.101 is the outside client machine. ANS has IP 10.0.0.2.

When I try, from the client machine 10.0.0.101: curl 10.0.0.2:91 -> nginx proxy_pass -> 10.0.0.2:80

ANS IP routing table
Destination      Gateway          Netmask          Flags   Iface           
10.0.0.0         *                255.255.255.0    U C     eth0            
10.0.0.2         *                255.255.255.255  U H L                   
10.0.0.101       *                255.255.255.255  U H L   eth0      

There is a bad gateway route for 10.0.0.2, which was added automatically.

netdpcmd fail to run

Hi,

I tried to run netdpcmd:

sudo ./build/netdpcmd
Here is the error information:
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 2 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 8 lcore(s)
PANIC in rte_eal_config_attach():
Cannot open '/var/run/.rte_config' for rte_mem_config
6: [./build/netdpcmd() [0x4299b3]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f9ad0132ec5]]
4: [./build/netdpcmd(main+0x6c) [0x42854c]]
3: [./build/netdpcmd(rte_eal_init+0xd1d) [0x49b2fd]]
2: [./build/netdpcmd(__rte_panic+0xc9) [0x422b10]]
1: [./build/netdpcmd(rte_dump_stack+0x18) [0x4a19a8]]

Can you give any suggestions?

Thanks

opendp crashed after another client ran linux_udp

Hello, I tried to run the Demo::UDP socket example.

I used dpdk-2.1.0 and followed the wiki settings.
After starting opendp, setting the IP and routing, and running dpdk_udp,
I started linux_udp (with just the IP modified in the source) on another PC in the same subnet.
(Before starting linux_udp, I could ping the DPDK machine.)
But my "opendp" program just exited after linux_udp ran, without any warning.

Checking link status ..........................done
Port 0 Link Up - speed 1000 Mbps - full-duplex
USER8: main loop on lcore 0
USER8:  -- lcoreid=0 portid=0 rxqueueid=0
nb ports 1 hz: 3591685372 
ray $     // look at this: no Ctrl+C or anything

I don't know what's going on there. Could you help?
If needed, I can provide more data and description.
Thanks!

Could librte_netdp provide ARP lookup and route lookup APIs?

Currently dpdk-ans supports only a few socket types.
In such situations, some communication requirements may not be satisfied.
If librte_netdp provided ARP lookup and route lookup APIs (which DPDK does not provide), users could utilize them to implement specific requirements that cannot be satisfied with only UDP or TCP sockets.
Thanks very much!

dpdk-ans is too slow compared to regular Linux for small packet sizes

With regular Linux, these are the numbers for MSS 128 and packet size 64 bytes.

root@srv12:~/dpdk-iperf# ip netns exec ns1 iperf3 -c 10.0.0.2 -M128 -l64 -Z -4 -N
Connecting to host 10.0.0.2, port 5201
[ 4] local 10.0.0.3 port 59976 connected to 10.0.0.2 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 30.1 MBytes 252 Mbits/sec 6590 28.7 KBytes
[ 4] 1.00-2.00 sec 23.8 MBytes 199 Mbits/sec 4660 35.2 KBytes
[ 4] 2.00-3.00 sec 23.5 MBytes 197 Mbits/sec 1441 36.5 KBytes
[ 4] 3.00-4.00 sec 27.7 MBytes 232 Mbits/sec 1238 27.0 KBytes
[ 4] 4.00-5.00 sec 24.6 MBytes 207 Mbits/sec 989 38.9 KBytes
[ 4] 5.00-6.00 sec 26.3 MBytes 220 Mbits/sec 2383 17.0 KBytes
[ 4] 6.00-7.00 sec 26.2 MBytes 220 Mbits/sec 2380 34.1 KBytes
[ 4] 7.00-8.00 sec 19.7 MBytes 166 Mbits/sec 1315 12.1 KBytes
[ 4] 8.00-9.00 sec 30.5 MBytes 256 Mbits/sec 1837 37.2 KBytes
[ 4] 9.00-10.00 sec 25.4 MBytes 213 Mbits/sec 1148 11.7 KBytes


[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 258 MBytes 216 Mbits/sec 23981 sender
[ 4] 0.00-10.00 sec 253 MBytes 212 Mbits/sec receiver

iperf Done.
~
With DPDK iperf:
root@srv12:~/dpdk-iperf# ./iperf3 -c 10.0.0.2 -M128 -l64 -Z
Connecting to host 10.0.0.2, port 5201
[ 5] local 10.0.0.5 port 42438 connected to 10.0.0.2 port 5201
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 5] 0.00-3.75 sec 2.11 MBytes 4.71 Mbits/sec 66 9.50 KBytes
[ 5] 3.75-3.75 sec 0.00 Bytes 0.00 bits/sec 0 9.88 KBytes
[ 5] 3.75-3.75 sec 0.00 Bytes 0.00 bits/sec 0 10.0 KBytes
[ 5] 3.75-4.00 sec 135 KBytes 4.46 Mbits/sec 146 13.2 KBytes
[ 5] 4.00-5.00 sec 214 KBytes 1.75 Mbits/sec 125 512 Bytes
[ 5] 5.00-6.00 sec 157 KBytes 1.28 Mbits/sec 82 5.25 KBytes
[ 5] 6.00-7.00 sec 172 KBytes 1.41 Mbits/sec 92 6.12 KBytes
[ 5] 7.00-8.51 sec 368 KBytes 2.00 Mbits/sec 188 6.25 KBytes
[ 5] 8.51-9.26 sec 167 KBytes 1.82 Mbits/sec 86 5.88 KBytes
An unknown state was sent by the client, ignoring it.
[ 5] 9.26-10.26 sec 174 KBytes 1.43 Mbits/sec 92 6.00 KBytes


[ ID] Interval Transfer Bandwidth Retr
[ 5] 0.00-10.26 sec 3.46 MBytes 2.83 Mbits/sec 877 sender
[ 5] 0.00-10.26 sec 3.39 MBytes 2.77 Mbits/sec receiver
~
I set #define MAX_TX_BURST 1 by referring to issue #16.

Can you please help improve this?

Thanks!

Sudhi

Are there any plans to release the source?

Hi,

First of all, thank you for your work on this project!

Are you planning to make the source code available in the future? This is fundamental to allow outside contributions.

Best regards,
Joao

dpdk-ans does not work in a virtual machine

Hi

I am trying to use dpdk-ans between two VMs (ubuntu 1611) with dpdk 1607.

In ans_main.c:

I moved the #if 0 ... #endif to use the KVM virtio NIC struct.
I replaced
ret = rte_eth_tx_queue_setup(portid, queueid, ANS_TX_DESC_DEFAULT, socketid, txconf);
with
ret = rte_eth_tx_queue_setup(portid, queueid, ANS_TX_DESC_DEFAULT, rte_eth_dev_socket_id(portid), NULL);

and set ANS_TX_DESC and ANS_RX_DESC to 256 in ans_main.h.

ans starts with ./build/ans -c 0x1 -n 1 -w 0000:00:03.0 -- -p 0x1 --config="(0,0,0)"

I see the packet stats increment with dpdk-procinfo -w 0000:00:03.0 -- --stats when pinging from the other VM, but I can't see any reply ...

Regards,

Olivier

Error when running dpdk_tcp_client

Running dpdk_tcp_client always fails. Everything else is set up correctly, and 64 hugepages are reserved.

First, if the hugepages were only just mounted, /mnt/huge is empty.

Then, running it directly produces this error...

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.
EAL:    This may cause issues with mapping memory into secondary processes
EAL: Analysing 64 files
EAL: Could not open /mnt/huge/rtemap_53
PANIC in rte_eal_init():
Cannot init memory
7: [./build/dpdk_tcp_client() [0x42a023]]
6: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7f20822bfec5]]
5: [./build/dpdk_tcp_client(main+0x2e) [0x428a6e]]
4: [./build/dpdk_tcp_client(netdpsock_init+0x6c) [0x42a96c]]
3: [./build/dpdk_tcp_client(rte_eal_init+0xfa7) [0x48fc07]]
2: [./build/dpdk_tcp_client(__rte_panic+0xc9) [0x423ed4]]
1: [./build/dpdk_tcp_client(rte_dump_stack+0x18) [0x496358]]

The error says "Could not open /mnt/huge/rtemap_53"; since /mnt/huge is empty, of course that file does not exist...


But running the examples bundled with DPDK works fine.

After running one of DPDK's bundled examples, a batch of files appears under /mnt/huge, like this:

total 128M
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_0
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_1
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_10
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_11
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_12
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_13
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_14
......
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_63
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_7
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_8
-rwxr-xr-x 1 root root 2.0M Sep 29 20:28 rtemap_9

Then running dpdk_tcp_client again gives this error:

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.
EAL:    This may cause issues with mapping memory into secondary processes
EAL: Analysing 64 files
EAL: Mapped segment 0 of size 0x200000
EAL: Mapped segment 1 of size 0x200000
EAL: Mapped segment 2 of size 0x200000
EAL: Mapped segment 3 of size 0x200000
EAL: Mapped segment 4 of size 0x200000
EAL: Mapped segment 5 of size 0x200000
EAL: Mapped segment 6 of size 0x400000
EAL: Mapped segment 7 of size 0x200000
EAL: Mapped segment 8 of size 0x800000
EAL: Mapped segment 9 of size 0xc00000
EAL: Mapped segment 10 of size 0x1c00000
EAL: Mapped segment 11 of size 0x2200000
EAL: Mapped segment 12 of size 0x800000
EAL: Mapped segment 13 of size 0x200000
EAL: Mapped segment 14 of size 0x200000
EAL: Mapped segment 15 of size 0x200000
EAL: Mapped segment 16 of size 0x200000
EAL: Mapped segment 17 of size 0x200000
EAL: Mapped segment 18 of size 0x200000
EAL: Mapped segment 19 of size 0x200000
EAL: Mapped segment 20 of size 0x200000
EAL: Mapped segment 21 of size 0x200000
EAL: Mapped segment 22 of size 0x200000
EAL: memzone_reserve_aligned_thread_unsafe(): memzone <RG_MP_log_history> already exists
RING: Cannot reserve memory
EAL: TSC frequency is ~2300002 KHz
EAL: Master lcore 0 is ready (tid=3edc0940;cpuset=[0])
EAL: PCI device 0000:00:05.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:00:06.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   PCI memory mapped at 0x7f956fe00000
EAL: PCI device 0000:00:07.0 on NUMA socket -1
EAL:   probe driver: 8086:100f rte_em_pmd
EAL:   PCI memory mapped at 0x7f956fe20000
USER8: Can't find netdp memzone 
init sock ring failed

What is this netdp memzone?

I also don't really understand why this error occurs...

It's not just this program; the other examples bundled with dpdk-odp fail the same way.

init sock ring failed

Hi, I have followed the wiki documentation but can't get data sending/receiving working with the UDP examples.
The anssock_init() function always fails with a return code of 1001:

./build/app/dpdk_udp

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 1
EAL: Detected lcore 2 as core 1 on socket 0
EAL: Detected lcore 3 as core 1 on socket 1

[...]

EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 48 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: Analysing 2052 files
EAL: Mapped segment 0 of size 0x40000000

[...]

EAL: Mapped segment 42 of size 0x200000
EAL: TSC frequency is ~2700000 KHz
EAL: Master lcore 0 is ready (tid=f7fca8c0;cpuset=[0])
init sock ring failed 1001

(I modified the source to print the return code.) It is able to create the socket, bind, epoll, etc. There are no errors or output after "init sock ring"; it waits for data but never receives any. Can we get documentation for the return codes of anssock_init()?

Dev environment details:

DPDK 17.08 (only a couple of lines of code changed to get it to compile)
RHEL 7.4; ANS compiled using devtoolset-6 (gcc 6.3.1)
Intel Ivy Bridge
X710 NICs

ANS starts without error, by the way:

[root@dev01 dpdk-ans]# ./ans/build/ans -c 0x1 -n 1 -- -p 0x1 --config="(0,0,0)"

EAL: Detected 48 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:41:00.0 on NUMA socket 1
EAL: probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:41:00.1 on NUMA socket 1
EAL: probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:42:00.0 on NUMA socket 1
EAL: probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:42:00.1 on NUMA socket 1
EAL: probe driver: 8086:1572 net_i40e
param nb 1 ports 1
port id 0

[......]

Checking link status done
Port 0 Link Up - speed 10000 Mbps - full-duplex
USER8: main loop on lcore 0
USER8: -- lcoreid=0 portid=0 rxqueueid=0
nb ports 1 hz: 2700004458

Panic when calling anssock_init() from C++

I would like to use ANS (as of 2016-06-29 from GitHub) in a C++ program. I can compile and link the program successfully:

[100%] Building CXX object CMakeFiles/binxfer.dir/binxfer.cpp.o
/usr/bin/c++ -std=c++11 -I/work/dpdk-ans/librte_anssock/include -I/work/dpdk-ans/librte_ans/include -o CMakeFiles/binxfer.dir/binxfer.cpp.o -c /work/linktest/binxfer.cpp
Linking CXX executable binxfer
/usr/bin/cmake -E cmake_link_script CMakeFiles/binxfer.dir/link.txt --verbose=1
/usr/bin/c++ -std=c++11 CMakeFiles/binxfer.dir/binxfer.cpp.o -o binxfer -rdynamic /work/dpdk-ans/librte_ans/librte_ans.a /work/dpdk-ans/librte_anssock/librte_anssock.a /usr/local/share/dpdk/x86_64-native-linuxapp-gcc/lib/librte_mbuf.a /usr/local/share/dpdk/x86_64-native-linuxapp-gcc/lib/librte_eal.a /usr/local/share/dpdk/x86_64-native-linuxapp-gcc/lib/librte_mempool.a /usr/local/share/dpdk/x86_64-native-linuxapp-gcc/lib/librte_ring.a -lrt -lpthread -ldl

But the program terminates when calling anssock_init() right after startup:

./binxfer

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 1 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
PANIC in rte_eal_config_reattach():
Cannot mmap memory for rte_config
7: [./binxfer() [0x40723b]]
6: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7ffff6becf45]]
5: [./binxfer(main+0x164) [0x407464]]
4: [./binxfer(anssock_init+0x17a) [0x40c16a]]
3: [./binxfer(rte_eal_init+0xf69) [0x418629]]
2: [./binxfer(__rte_panic+0xc9) [0x406c85]]
1: [./binxfer(rte_dump_stack+0x1a) [0x41f7fa]]
Aborted (core dumped)

The example webserver included with dpdk-ans is working correctly on the same system.

Is it possible to use ANS from C++ applications? Do I need any specific compile/link options? (Sorry if I'm filing this issue in the wrong place.)

ans fails to run

Hi, I tried to run ans, but check_port_config failed.
Can you give any suggestions?
Thanks.

Here is the error information:
./ans -c 0x1 -n 1 -- -p 0x1 --config="(0,0,0)"
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Probing VFIO support...
EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
EAL: VFIO modules not loaded, skipping VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f7b0ba00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7b0b600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f7b0b200000 (size = 0x200000)
EAL: Ask a virtual area of 0x7800000 bytes
EAL: Virtual area found at 0x7f7b03800000 (size = 0x7800000)
EAL: Requesting 64 pages of size 2MB from socket 0
EAL: TSC frequency is ~2294676 KHz
EAL: Master lcore 0 is ready (tid=df39940;cpuset=[0])
param nb 1 ports 0
port id 0
port 0 is not present on the board
EAL: Error - exiting with code: 1
Cause: check_port_config failed

Cannot configure device: err=-22, port=0

I built ans as instructed. This is the message I got:

sudo build/ans -c 0x3 -n 2 -- -p 0x3
Start to Init port
port 0:
port name rte_virtio_pmd:
max_rx_queues 1: max_tx_queues:1
rx_offload_capa 0x0: tx_offload_capa:0x0
Creating queues: rx queue number=1 tx queue number=2...
EAL: Error - exiting with code: 1
Cause: Cannot configure device: err=-22, port=0

I heard that E1000M NICs cannot support tx_queue > 1, so I changed it to a virtio-net NIC.
Before I ran into this, I couldn't run l3fwd successfully either, with the same parameters, and the error messages are quite similar. Can somebody tell me where I went wrong? Thanks in advance. (BTW, l2fwd and some other examples run just fine.)

issue regarding UDP-socket demo

I am trying to recreate the UDP-socket demo on a VM launched on an ESX server using VMware.
My Linux version is 3.13.0-32-generic (Ubuntu 14.04).
I am using dpdk-ans-ans-16.08 and DPDK version 17.02.

I have bound one interface with the dpdk igb_uio module as described in the wiki.

Network devices using DPDK-compatible driver

0000:13:00.0 'VMXNET3 Ethernet Controller' drv=igb_uio unused=

Network devices using kernel driver

0000:03:00.0 'VMXNET3 Ethernet Controller' if=eth0 drv=vmxnet3 unused=igb_uio Active
0000:0b:00.0 'VMXNET3 Ethernet Controller' if=eth1 drv=vmxnet3 unused=igb_uio

Other network devices

Crypto devices using DPDK-compatible driver

Crypto devices using kernel driver

Other crypto devices

I have also assigned an IP address to it using ODP and netdpcmd:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
param nb 1 ports 1
port id 0

Start to Init port
port 0:
Creating queues: rx queue number=1 tx queue number=1...
MAC Address:00:0C:29:49:A7:0F
lcore id:0, tx queue id:0, socket id:0

Allocated mbuf pool on socket 0, mbuf number: 8192

Initializing rx queues on lcore 0 ...
port id:0, rx queue id: 0, socket id:0
core mask: 1, sockets number:1, lcore number:1
start to init netdp
USER8: lcore mask 0x1
USER8: lcore id 0
USER8: lcore number 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER8: max udp conn number: 129, udp conn number per lcore 129
USER8: max sock number: 4097, sock number per lcore 4097
USER8: Max application user: 6
register callback
add eth0 device
add IP 2020202 on device eth0
Show interface

eth0 HWaddr 00:0c:29:49:a7:0f
inet addr:2.2.2.2
inet addr:255.255.255.0
add static route

Destination Gateway Netmask Flags Iface
2.2.2.0 * 255.255.255.0 U C 0
2.2.2.5 * 255.255.255.255 U H L 0
3.3.3.0 2.2.2.5 255.255.255.0 U G 0

Checking link status done
Port 0 Link Up - speed 10000 Mbps - full-duplex
USER8: main loop on lcore 0
USER8: -- lcoreid=0 portid=0 rxqueueid=0
nb ports 1 hz: 2399998495

When I start ans, I hit:

EAL: Error - exiting with code: 1
Cause: rte_eth_tx_queue_setup: err=-22, port=0

./build/ans -c 0x1 -n 1 -- -p 0x1 --config="(0,0,0)"

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:13:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
param nb 1 ports 1
port id 0

Start to Init port
port 0:
port name net_vmxnet3:
max_rx_queues 16: max_tx_queues:8
rx_offload_capa 29: tx_offload_capa:45
Creating queues: rx queue number=1 tx queue number=1...
MAC Address:00:0C:29:49:A7:0F
Deault-- tx pthresh:0, tx hthresh:0, tx wthresh:0, txq_flags:0x200
lcore id:0, tx queue id:0, socket id:0
Conf-- tx pthresh:36, tx hthresh:0, tx wthresh:0, txq_flags:0xfffff1ff
EAL: Error - exiting with code: 1
Cause: rte_eth_tx_queue_setup: err=-22, port=0

I am confused about what I am missing here that causes this error.

AF_INET domain SOCK_RAW socket support

Could you implement AF_INET domain SOCK_RAW sockets in dpdk-ans?
Thanks very much!

Under Linux, the usage is like this:
raw_socket = socket(AF_INET, SOCK_RAW, protocol);

Some users may want to use AF_INET domain SOCK_RAW sockets to receive or transmit specific IP packets.
There is more info at the URL below:
http://linux.die.net/man/7/ip

Failed to ping after running opendp

Hi,

I successfully started opendp with the command: sudo ./build/opendp -c 0x1 -n 1 -- -P -p 0x1 --config="(0,0,0)"
The run info:
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 2 on socket 0
EAL: Detected lcore 7 as core 3 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 8 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x5400000 bytes
EAL: Virtual area found at 0x7f9130200000 (size = 0x5400000)
EAL: Ask a virtual area of 0x12400000 bytes
EAL: Virtual area found at 0x7f911dc00000 (size = 0x12400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911d600000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911d000000 (size = 0x400000)
EAL: Ask a virtual area of 0x1400000 bytes
EAL: Virtual area found at 0x7f911ba00000 (size = 0x1400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f911b400000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f911b000000 (size = 0x200000)
EAL: Ask a virtual area of 0x65c00000 bytes
EAL: Virtual area found at 0x7f90b5200000 (size = 0x65c00000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f90b4e00000 (size = 0x200000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f90b4800000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7f90b4200000 (size = 0x400000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: TSC frequency is ~3690749 KHz
EAL: Master lcore 0 is ready (tid=37be1900;cpuset=[0])
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f9135600000
EAL: PCI memory mapped at 0x7f9135680000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 12, SFP+: 3
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: Not managed by a supported kernel driver, skipped
Promiscuous mode selected
param nb 1 ports 1
port id 0

Start to Init port
port 0:
port name rte_ixgbe_pmd:
max_rx_queues 128: max_tx_queues:128
rx_offload_capa 31: tx_offload_capa:63
Creating queues: rx queue number=1 tx queue number=1...
MAC Address:00:1B:21:BB:7C:24
Deault-- tx pthresh:32, tx hthresh:0, tx wthresh:0, txq_flags:0xf01
lcore id:0, tx queue id:0, socket id:0
Conf-- tx pthresh:36, tx hthresh:0, tx wthresh:0, txq_flags:0xfffff1ff
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f90b4328dc0 hw_ring=0x7f90b432ae00 dma_addr=0xbc892ae00
PMD: ixgbe_set_tx_function(): Using full-featured tx code path
PMD: ixgbe_set_tx_function(): - txq_flags = fffff1ff [IXGBE_SIMPLE_FLAGS=f01]
PMD: ixgbe_set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]

Allocated mbuf pool on socket 0, mbuf number: 16384

Initializing rx queues on lcore 0 ...
Default-- rx pthresh:8, rx hthresh:8, rx wthresh:0
port id:0, rx queue id: 0, socket id:0
Conf-- rx pthresh:8, rx hthresh:8, rx wthresh:4
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f90b42d82c0 sw_sc_ring=0x7f90b42d7d80 hw_ring=0x7f90b42d8800 dma_addr=0xbc88d8800
core mask: 1, sockets number:1, lcore number:1
start to init netdp
USER8: LCORE[0] lcore mask 0x1
USER8: LCORE[0] lcore id 0 is enable
USER8: LCORE[0] lcore number 1
USER1: rte_ip_frag_table_create: allocated of 25165952 bytes at socket 0
USER8: LCORE[0] UDP layer init successfully, Use memory:4194304 bytes
USER8: LCORE[0] TCP hash table init successfully, tcp pcb size 448 total size 27525120
USER8: LCORE[0] so shm memory 17039360 bytes, so number 133120, sock shm size 128 bytes
USER8: LCORE[0] Sock init successfully, allocated of 42598400 bytes
add eth0 device
add IP 2020202 on device eth0
Show interface

eth0 HWaddr 00:1b:21:bb:7c:24
inet addr:2.2.2.2
inet addr:255.255.255.0
add static route

Destination Gateway Netmask Flags Iface
2.2.2.0 * 255.255.255.0 U C 0
2.2.2.5 * 255.255.255.255 U H L 0
3.3.3.0 2.2.2.5 255.255.255.0 U G 0

USER8: LCORE[-1] NETDP mgmt thread startup
PMD: ixgbe_set_rx_function(): Port[0] doesn't meet Vector Rx preconditions or RTE_IXGBE_INC_VECTOR is not enabled
PMD: ixgbe_set_rx_function(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0.

Checking link status .done
Port 0 Link Up - speed 10000 Mbps - full-duplex
USER8: main loop on lcore 0
USER8: -- lcoreid=0 portid=0 rxqueueid=0
nb ports 1 hz: 3690749763

I assume that after this I should be able to ping the NIC's address 2.2.2.2. However, it fails.

Can you give any suggestions?

Thanks

TCP connect fails when connections exceed 16382

I simply create new sockets and connect to another server. When the number of TCP connections exceeds 16380, the connect action fails.
The error message is like below.

connect to server failed 
errno:105.
error:No buffer space available.
USER8: LCORE[2] Invalid parameters 

The message from dpdk-ans is:

USER8: LCORE[0] Get free tcp port failed 
USER8: LCORE[0] tcp_bind: malloc port failed 
USER8: LCORE[0] Get free tcp port failed 
USER8: LCORE[0] tcp_bind: malloc port failed 
USER8: LCORE[0] Get free tcp port failed 
USER8: LCORE[0] tcp_bind: malloc port failed 
USER8: LCORE[0] Get free tcp port failed 
USER8: LCORE[0] tcp_bind: malloc port failed 
USER8: LCORE[0] Get free tcp port failed 
USER8: LCORE[0] tcp_bind: malloc port failed 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[0] Close or shutdown tcp pcb failed, errno: 12 

Versions:
dpdk: dpdk-16.07
OS: CentOS 6.8, Linux kernel: 3.18.35
Compiler: gcc-4.4.7 or gcc-4.9.3
NIC: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network

Cannot open '/home/shuzilm/.rte_config' for rte_mem_config

./http_server
affinity to 0 core by default
EAL: Detected 4 lcore(s)
PANIC in rte_eal_config_attach():
Cannot open '/home/xxxxxxx/.rte_config' for rte_mem_config
9: [./https_server(_start+0x29) [0x408779]]
8: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ff33fdd5ac0]]
7: [./https_server(main+0x56) [0x408596]]
6: [./https_server(RunHttpsThread+0x18) [0x408988]]
5: [./https_server(ans_mod_init+0x2d7) [0x409077]]
4: [./https_server(anssock_init+0x182) [0x40b632]]
3: [./https_server(rte_eal_init+0xcc1) [0x41df41]]
2: [./https_server(__rte_panic+0xbe) [0x4084a6]]
1: [./https_server(rte_dump_stack+0x18) [0x425a78]]
Aborted (core dumped)

dpdk-ans is slower than regular Linux epoll with 100Gbit/s

I've been converting iperf3 to use DPDK instead of the Linux networking stack.
However, when running it normally it can reach ~40 Gbit/s, while the dpdk-ans version can only achieve ~4 Gbit/s.

The source code which uses regular epoll can be found here: https://github.com/JelteF/iperf/tree/epoll
And the code for the ANS version here: https://github.com/JelteF/iperf/tree/ans

Do you have any suggestions on how to improve the performance?

Build failed with latest DPDK v17

Build of 'ans' failed with the following error due to an API mismatch.

$ make
CC ans_kni.o
/home/upawar/projects/dpdk-ans/dpdk-ans/ans/ans_kni.c: In function ‘ans_kni_sendpkt_burst’:
/home/upawar/projects/dpdk-ans/dpdk-ans/ans/ans_kni.c:148:12: error: too few arguments to function ‘rte_ring_enqueue_bulk’
return rte_ring_enqueue_bulk(ring,(void **)mbufs,nb_mbufs);
^
In file included from /home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_mempool.h:78:0,
from /home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:65,
from /home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_ether.h:52,
from /home/upawar/projects/dpdk-ans/dpdk-ans/ans/ans_kni.c:64:
/home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_ring.h:644:1: note: declared here
rte_ring_enqueue_bulk(struct rte_ring *r, void * const *obj_table,
^
/home/upawar/projects/dpdk-ans/dpdk-ans/ans/ans_kni.c: In function ‘kni_ring_to_kni’:
/home/upawar/projects/dpdk-ans/dpdk-ans/ans/ans_kni.c:325:13: error: too few arguments to function ‘rte_ring_dequeue_burst’
nb_rx = rte_ring_dequeue_burst(p->ring,(void **)&pkts_burst, PKT_BURST_SZ);
^
In file included from /home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_mempool.h:78:0,
from /home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_mbuf.h:65,
from /home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_ether.h:52,
from /home/upawar/projects/dpdk-ans/dpdk-ans/ans/ans_kni.c:64:
/home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/x86_64-native-linuxapp-gcc/include/rte_ring.h:1096:1: note: declared here
rte_ring_dequeue_burst(struct rte_ring *r, void **obj_table,
^
/home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/mk/internal/rte.compile-pre.mk:138: recipe for target 'ans_kni.o' failed
make[1]: *** [ans_kni.o] Error 1
/home/upawar/projects/dpdk-ans/dpdk-stable-17.05.1/mk/rte.extapp.mk:42: recipe for target 'all' failed
make: *** [all] Error 2

TCP problem on real server

I have 2 real servers. I installed opendp and nginx on one, with this NIC:

0000:01:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=em2 drv=ixgbe

I get IP 2.2.2.2 with the opendp app, but I cannot ping it. After researching, I found that the reason is the checksum offload feature of the NIC. I fixed it with the following script:

cd dpdk-odp/opendp
sed -i -e 's/NETDP_HW_CHKSUM_DISABLE/NETDP_HW_CHKSUM_DISABLE/g' odp_main.c
make

OK, now I can ping the server.
But I still cannot complete a TCP request to it. I monitored the network packets using Wireshark and found that the connection step is OK (three-way handshake with 3 packets). After the connection step, the client sends an HTTP request to the nginx server, and the sending is OK, but the server seems not to handle the request (or it handled it but the request was dropped earlier; or the response from nginx was dropped by lower modules like dpdk or opendp; or ...), so the client cannot get the response, and it retransmits the request.

Can you help me about this problem?

PS: some pcap files with http request
with curl: https://drive.google.com/open?id=0B8RSLiwE-PAxZjE4VnRjT3ZfLUk
with telnet: https://drive.google.com/open?id=0B8RSLiwE-PAxQzlLNVRBSVRwdFU
https://drive.google.com/open?id=0B8RSLiwE-PAxejZrTEdmdlJuZ0E

No reference for rte_mempool_ops_table

I ran into the following issue when using dpdk-16.04. Is this not supported for that DPDK version?

gcc -o http_server http_server.o -O3 -I/root/dpdk-ans/librte_ans/include -I/root/dpdk-ans/librte_anssock/include /root/dpdk-ans/librte_anssock/librte_anssock.a -L/root/mtcp/dpdk-16.04/x86_64-native-linuxapp-gcc/lib -Wl,--whole-archive -Wl,-lrte_mbuf -Wl,-lrte_mempool -Wl,-lrte_ring -Wl,-lrte_eal -Wl,--no-whole-archive -Wl,-export-dynamic -lrt -pthread -ldl
/root/dpdk-ans/librte_anssock/librte_anssock.a(anssock_api.o): In function `rte_mempool_ops_enqueue_bulk':
anssock_api.c:(.text+0x5c): undefined reference to `rte_mempool_ops_table'
/root/dpdk-ans/librte_anssock/librte_anssock.a(anssock_api.o): In function `anssock_alloc_datambuf':
anssock_api.c:(.text+0x1e0): undefined reference to `rte_mempool_ops_table'
anssock_api.c:(.text+0x220): undefined reference to `rte_mempool_ops_table'
/root/dpdk-ans/librte_anssock/librte_anssock.a(anssock_api.o): In function `anssock_alloc_ctrlmbuf':
anssock_api.c:(.text+0x3f0): undefined reference to `rte_mempool_ops_table'
anssock_api.c:(.text+0x430): undefined reference to `rte_mempool_ops_table'
/root/dpdk-ans/librte_anssock/librte_anssock.a(anssock_api.o):anssock_api.c:(.text+0x59a): more undefined references to `rte_mempool_ops_table' follow
collect2: error: ld returned 1 exit status
make: *** [http_server] Error 1

/usr/bin/ld: /local/dpdk-ans/librte_ans/librte_ans.a(ans_init.o): unrecognized relocation (0x2a) in section `.text'

DPDK ANS compilation fails with the following error:

/usr/bin/ld: /local/dpdk-ans/librte_ans/librte_ans.a(ans_init.o): unrecognized relocation (0x2a) in section `.text'

Information about the system:

I tried both GCC versions :

[root@node003 ans]# gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11)
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

[root@node003 ans]# gcc --version
gcc (GCC) 6.3.0
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Kernel :

Linux node003 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Cpu :
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 15
Model name: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz
Stepping: 6
CPU MHz: 2992.222
BogoMIPS: 5985.04
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K

Any idea on how to resolve this ?

unrecognized relocation (0x2a) in section `.text'

/usr/bin/ld: /home/shuzilm/workspace/dpdk-ans/librte_ans/librte_ans.a(ans_init.o): unrecognized relocation (0x2a) in section `.text'
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
/home/shuzilm/workspace/dpdk-stable-16.11.1/mk/rte.app.mk:231: recipe for target 'ans' failed
make[1]: *** [ans] Error 1
/home/shuzilm/workspace/dpdk-stable-16.11.1/mk/rte.extapp.mk:42: recipe for target 'all' failed
make: *** [all] Error 2

Typo in librte_ans.a

It contains reigster instead of register:

$ ack reigster librte_ans.a
USER8: LCORE[%d] ans_tcp_recv_cb: App didn't reigster EPOLLIN 
USER8: LCORE[%d] ans_udp_recv_cb: App didn't reigster EPOLLIN 
USER8: LCORE[%d] tcp close: App didn't reigster EPOLLIN 
USER8: LCORE[%d] ans_accept_cb: App didn't reigster EPOLLIN 

I could change it in the binary, but it would probably be better to change it in the source code directly.

Source Code

Where can I find the librte_ans source code?

Correct place to look for the source code?

Where can one find the source used to compile these object files inside librte_ans.a?

rw-r--r-- 0/0 8528 Dec 31 19:00 1969 ans_init.o
rw-r--r-- 0/0 14832 Dec 31 19:00 1969 ans_ring.o
rw-r--r-- 0/0 7664 Dec 31 19:00 1969 ans_conf.o
rw-r--r-- 0/0 6688 Dec 31 19:00 1969 ans_enet.o
rw-r--r-- 0/0 10448 Dec 31 19:00 1969 ans_enet_config.o
rw-r--r-- 0/0 3920 Dec 31 19:00 1969 ans_enet_subr.o
rw-r--r-- 0/0 6928 Dec 31 19:00 1969 ans_ip_in.o
rw-r--r-- 0/0 8808 Dec 31 19:00 1969 ans_ip_input.o
rw-r--r-- 0/0 13504 Dec 31 19:00 1969 ans_ip_route.o
rw-r--r-- 0/0 12520 Dec 31 19:00 1969 ans_ip_radix.o
rw-r--r-- 0/0 4848 Dec 31 19:00 1969 ans_ip_in_rmx.o
rw-r--r-- 0/0 16616 Dec 31 19:00 1969 ans_ip_config.o
rw-r--r-- 0/0 3176 Dec 31 19:00 1969 ans_ip_if.o
rw-r--r-- 0/0 20640 Dec 31 19:00 1969 ans_enet_arp.o
rw-r--r-- 0/0 7080 Dec 31 19:00 1969 ans_ip_output.o
rw-r--r-- 0/0 5552 Dec 31 19:00 1969 ans_ip_icmp.o
rw-r--r-- 0/0 3712 Dec 31 19:00 1969 ans_ip_reass.o
rw-r--r-- 0/0 13376 Dec 31 19:00 1969 ans_udp.o
rw-r--r-- 0/0 32576 Dec 31 19:00 1969 ans_socket.o
rw-r--r-- 0/0 33008 Dec 31 19:00 1969 ans_conn.o
rw-r--r-- 0/0 10008 Dec 31 19:00 1969 ans_socket_ring.o
rw-r--r-- 0/0 29680 Dec 31 19:00 1969 ans_tcp.o
rw-r--r-- 0/0 30984 Dec 31 19:00 1969 ans_tcp_input.o
rw-r--r-- 0/0 37040 Dec 31 19:00 1969 ans_tcp_output.o
rw-r--r-- 0/0 12864 Dec 31 19:00 1969 ans_tcp_table.o
rw-r--r-- 0/0 13520 Dec 31 19:00 1969 ans_epoll.o
rw-r--r-- 0/0 5120 Dec 31 19:00 1969 ans_tcp_debug.o
rw-r--r-- 0/0 8080 Dec 31 19:00 1969 ans_mgmt.o

Error message when holding more than 30000 sockets

When I create more than 30000 sockets to connect to another machine, the connect action is successful.
When I try to close all the sockets, the ans process outputs this:

Run ./build/app/ans -c 0x3 -n 1 -- -P -p 0 --config="(0,0,1)"

USER8: lcore 0 has nothing to do
USER8: LCORE[1] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[1] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[1] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[1] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[1] Close or shutdown tcp pcb failed, errno: 12 
USER8: LCORE[1] Close or shutdown tcp pcb failed, errno: 12 

When using multiple cores, the output is like this:

./build/app/ans -c 0x1ff -n 1 -- -P -p 0 --config="(0,0,1),(0,1,2),(0,2,3),(0,3,4),(0,4,5)(0,5,6),(0,6,7),(0,7,8)"

USER8: LCORE[5] Getting tcp seg failed 
USER8: LCORE[5] tcp_create_segment: no memory.
USER8: LCORE[5] Close or shutdown tcp pcb failed, errno: 12

TSO support

The README and two commits (28a4dd5, 4fdcc65) mention that TSO is supported.
But you mentioned in #16 that it is not yet supported. Which of the two is the case?
