
dpdkcap's Introduction

DPDKCap

DPDKCap is a packet capture tool based on DPDK. It provides multi-port, multi-core optimized capture with on-the-fly compression, making it particularly well suited to captures at very high speeds (more than 10 Gbps).


1. Installation and platform configuration

1.1 Install DPDK

Please follow the DPDK installation instructions, either from the DPDK quick start instructions or from your operating system's specific Getting Started guide.

1.2 Install dependencies

DPDKCap requires the following dependencies to be built:

  • libncurses-dev

1.3 Build and Install DPDKCap

To build DPDKCap, you first need to set RTE_SDK and RTE_TARGET.

$ export RTE_SDK=... # Replace with your DPDK install directory
$ export RTE_TARGET=x86_64-native-linuxapp-gcc # Replace with your target

Then, to build DPDKCap, run the following command from the DPDKCap root directory:

$ make

2. Usage

DPDKCap works as a standard DPDK application, so it needs Environment Abstraction Layer (EAL) arguments before the dpdkcap-specific ones:

# ./build/dpdkcap [EAL args] -- [dpdkcap args]

Check out the DPDK documentation for more information on EAL arguments. You will probably need the -l option to allocate cores and the --huge-dir option to specify the huge pages directory.

To get a list of the available DPDKCap-specific options, run:

# ./build/dpdkcap [EAL args] -- --help

2.1 Selecting ports to capture

From the available ports detected by DPDK, you can select the ports to capture with the -p, --portmask option. This option takes as its argument a hexadecimal mask whose bits represent the ports. By default, DPDKCap uses only the first port (portmask=0x1).

For example, if you want to capture ports 0, 1 and 3, use: --portmask 0xb
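Since each bit of the mask selects the zero-indexed port with that position, the mask is just the bitwise OR of 1 << port_id over the selected ports. A minimal sketch of this arithmetic (plain Python, not part of DPDKCap; the portmask helper name is made up for illustration):

```python
# Build a --portmask value from a list of zero-indexed DPDK port ids.
def portmask(ports):
    mask = 0
    for port_id in ports:
        mask |= 1 << port_id  # one bit per port
    return mask

print(hex(portmask([0, 1, 3])))  # -> 0xb (binary 1011)
```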

2.2 Assigning tasks to lcores

DPDKCap assigns two different tasks to lcores:

  • Capturing cores enqueue packets from the Ethernet port queues into a main buffer. Each captured port must be assigned at least one core.
  • Writing cores dequeue packets from this buffer and write them into LZO-compressed pcap capture files. Each writing core writes into a different file.

As a consequence, DPDKCap needs at least one writing core and as many capturing cores as ports you want to capture. Additionally, one lcore must be reserved to handle logs and statistics. Depending on your traffic bandwidth and your system capabilities, you might need more cores.

The -c, --per_port_c_cores option allocates NB_CORES_PER_PORT capturing cores per selected port.

The -w, --num_w_cores option allocates a total of NB_CORES writing cores.

Note that the writing task requires more computational power than the capturing one (due to compression), so you will probably need to allocate more writing cores than capturing ones. That said, size your storage system accordingly: even thousands of cores cannot achieve a full capture if the storage bandwidth is too low.
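Putting the above together, the minimum lcore budget is NB_CORES_PER_PORT capture cores per captured port, plus NB_CORES writing cores, plus the one lcore reserved for logs and statistics. A back-of-the-envelope sketch of this arithmetic (plain Python; the function name is made up for illustration):

```python
# Minimum number of lcores DPDKCap needs for a given setup.
def min_lcores(nb_ports, per_port_c_cores=1, num_w_cores=1):
    capture_cores = nb_ports * per_port_c_cores  # -c capture cores per port
    stats_core = 1                               # reserved for logs/statistics
    return capture_cores + num_w_cores + stats_core

# One port with default options: 1 capture + 1 write + 1 stats core.
print(min_lcores(1))                                     # -> 3
print(min_lcores(2, per_port_c_cores=1, num_w_cores=4))  # -> 7
```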

2.3 Limiting file size or duration

Depending on the data you want to capture, you might need to split the capture into several files. Two options are available to limit file size/duration:

  • The -G, --rotate_seconds option creates a new file every T seconds.
  • The -C, --limit_file_size option creates a new file when the current file size goes over the specified SIZE.

You can specify the output file template using the -o, --output option. This is necessary with the -G, --rotate_seconds option if you do not want to overwrite the same file again and again. See the following section.

2.4 Setting output template

The -o, --output option lets you provide a template for the output file. The template supports the following tokens:

  • %COREID is replaced by the id of the writing core. This token is mandatory and is automatically appended to the output file template if not present.

  • %FCOUNT is replaced by a counter that distinguishes files created by the -C, --limit_file_size option. When this option is used, the token is mandatory and is automatically appended to the output file template if not present.

  • Date strftime tokens. These tokens are replaced according to the strftime standard. The date is updated every time the -G, --rotate_seconds option triggers a file change. These tokens are not mandatory with this option, but without them previously created files may be overwritten.
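The token expansion can be emulated outside DPDKCap to preview the filenames a template will produce. The sketch below (plain Python; a hypothetical helper, and the zero-padding widths are assumptions, not DPDKCap's actual formatting) substitutes %COREID and %FCOUNT first, then lets strftime handle the date tokens:

```python
import time

# Preview DPDKCap-style output template expansion (illustrative only).
def expand_template(template, core_id, file_count, when=None):
    out = template.replace("%COREID", "%02d" % core_id)  # writing core id
    out = out.replace("%FCOUNT", "%03d" % file_count)    # -C file counter
    return time.strftime(out, when or time.localtime())  # strftime date tokens

# Hypothetical template combining all three kinds of tokens:
print(expand_template("capture_%Y-%m-%d_%COREID_%FCOUNT.pcap.lzo", 1, 0))
```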

2.5 Other options

  • -s, --snaplen limits the packet capture to LENGTH bytes.
  • -S, --statistics prints a set of statistics while the capture is running.
  • --logs output logs into the specified file instead of stderr.
  • --no-compression disables the LZO compression. This is not advised, as it greatly increases the disk I/O; it can however be used for capturing low-speed traffic.
  • -m, --num_mbufs changes the number of memory buffers used by dpdkcap. Note that the default value might not work in your situation (mbuf pool allocation failure at startup, or RX mbuf allocation failures while running). Optimal values (in terms of memory usage) are powers of 2 minus one (n=2^q-1).
  • -d, --rx_desc lets you set the number of RX descriptors used per queue. This value can be set on a per-port basis. The following formats are available:
    • A single integer value: fixes the given number of RX descriptors for all ports.

    • A list of key-values, assigning a value to the given port id, following this format:

      <matrix>   := <key>.<nb_rx_desc> { "," <key>.<nb_rx_desc> ... }
      <key>      := { <interval> | <port> }
      <interval> := <lower_port> "-" <upper_port>
      

      Examples:

      512               - all ports have 512 RX desc per queue
      0.256, 1.512      - port 0 has 256 RX desc per queue,
                          port 1 has 512 RX desc per queue
      0-2.256, 3.1024   - ports 0, 1 and 2 have 256 RX desc per queue,
                          port 3 has 1024 RX desc per queue
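The grammar above is simple enough to parse in a few lines. A sketch of an equivalent parser (plain Python, a hypothetical helper, not DPDKCap's actual C implementation):

```python
# Parse a -d/--rx_desc matrix into a per-port list of RX descriptor counts.
def parse_rx_desc(spec, nb_ports):
    if "." not in spec:                 # single integer: same value for all ports
        return [int(spec)] * nb_ports
    descs = [None] * nb_ports
    for item in spec.split(","):
        key, _, value = item.strip().partition(".")
        if "-" in key:                  # interval "<lower_port>-<upper_port>"
            lower, upper = (int(p) for p in key.split("-"))
        else:                           # single port id
            lower = upper = int(key)
        for port in range(lower, upper + 1):
            descs[port] = int(value)
    return descs

print(parse_rx_desc("0-2.256, 3.1024", 4))  # -> [256, 256, 256, 1024]
```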
      

3. Troubleshooting

Here is a list of common issues and how to solve them:

  • Mbuf pool allocation failure (at startup): try reducing the number of memory buffers with the -m, --num_mbufs option.
  • Mbuf allocation failures (while running): try raising the number of memory buffers with the -m, --num_mbufs option.
  • Problems with the RX queue configuration: the default number of RX descriptors might be too high for your interface. Try changing the number of RX descriptors with the -d, --rx_desc option.

4. Software License Agreements

DPDKCap is distributed under the BSD License, see LICENSE.txt.

dpdkcap's People

Contributors

groud, kunschikov, woutifier, xaki23, yangye-huaizhou


dpdkcap's Issues

wire rate performance?

Not sure if something is incorrect, but capturing about 800K pps (average 400-500 bytes) needs 16 cores, and even then packets are missed. Is there a rule of thumb for how many cores you need for average packet capture and writing?

Core dump

The output is as follow:

DPDKCAP: Using 1 ports to listen on
DPDKCAP: Using 3 cores out of 3 allocated
DPDKCAP: Port 0: MAC=e8:61:1f:16:cc:5d, RXdesc/queue=512
DPDKCAP: Core 2 is capturing packets for port 0
DPDKCAP: Core 1 is writing using file template: output_%COREID.pcap.
Segmentation fault (core dumped)

Packet filtering

It would be very powerful if we could filter the packets that are saved to disk, similar to how tcpdump can filter packets with Berkeley Packet Filters, including filtering based on Port, MAC, VLAN, src/dst IP address/subnet, IP Protocol (TCP/UDP/ICMP/etc.) and TCP/UDP src/dst Port. Packets matching the filter would be saved to disk; all packets that did not match the filter would be discarded.

This may be tough to implement, so this feature request thread may be more of a place for us to discuss how this might be implemented. However, there may be some features within the DPDK API that already support this to some extent, but I haven't found them yet.

There is a long thread discussing this at dpdk.org in December 2015, but there was no resolution to it:
Thread list: [dpdk-dev] tcpdump support in DPDK 2.3
Parent message: [dpdk-dev] tcpdump support in DPDK 2.3

Your thoughts?

Snaplen not working

I made a little mistake in the code update: the snaplen option is not applied anymore, and the whole packet content is copied into the pcap file.

I have to fix this :)

lzop is claiming that the lzo files that dpdkcap creates are corrupt

I've been trying to have dpdkcap write out 25MB files, and when I try to use lzop to decompress them, only about 100KB (or less) of pcap is written to disk before lzop errors out saying "lzop: output_01_000.pcap.lzo: lzop file corrupted". The data rate for the capture was relatively low, less than 10 Mbit/s. Any ideas?

I'm using the latest from dpdkcap's master branch (as of Sept 2016), with DPDK 16.11 on Ubuntu 16.04.1 (64-bit). Command line:
sudo -E ./build/dpdkcap --master-lcore 0 -w 00:08.0 -c 0x7 -n 1 -- --limit_file_size=25000000 -p 0x1

Capturing Outgoing packets

Hi,

I want to capture outgoing packets rather than incoming packets. Is there a way to do it? Particularly, I want to know when my packet was actually sent on the wire, and I need to look at the send time in the packet capture dump.

Stats mapping issue

Hi,

I met a problem using dpdkcap, can you help me? The error I met is as follows. My operating system is Ubuntu 12.04, the DPDK version is 16.07, and the Ethernet controller is an Intel Corporation I350 Gigabit Network Connection.

I found that rte_eth_dev_set_rx_queue_stats_mapping(port, q, q) in the port_init function returns -95, which is caused by the struct rte_eth_dev not supporting the queue_stats_mapping_set function.

EAL: Detected 64 lcore(s)
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:16:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:16:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:16:00.2 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:16:00.3 on NUMA socket 0
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:46:00.0 on NUMA socket 1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:46:00.1 on NUMA socket 1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:46:00.2 on NUMA socket 1
EAL: probe driver: 8086:1521 rte_igb_pmd
EAL: PCI device 0000:46:00.3 on NUMA socket 1
EAL: probe driver: 8086:1521 rte_igb_pmd
DPDKCAP: Using 1 ports to listen on
DPDKCAP: Using 3 cores out of 8 allocated
EAL: Error - exiting with code: 1
Cause: Cannot init port 1
DPDKCAP: Core 1 is writing using file template: /home/rx_%COREID.pcap.lzo.

Best regards,
Xu

ncurses.h: No such file or directory

When i run "make" command in Ubuntu 16.04 following error appears:
root@ubuntu:~/dpdkcap# make
CC src/dpdkcap.o
CC src/core_write.o
CC src/core_capture.o
CC src/statistics_ncurses.o
/home/sub/dpdkcap/src/statistics_ncurses.c:4:21: fatal error: ncurses.h: No such file or directory
compilation terminated.
/home/sub/dpdk-16.07//mk/internal/rte.compile-pre.mk:138: recipe for target 'src/statistics_ncurses.o' failed
make[1]: *** [src/statistics_ncurses.o] Error 1
/home/sub/dpdk-16.07//mk/rte.extapp.mk:42: recipe for target 'all' failed
make: *** [all] Error 2

Dynamically fix the nb_rx_desc used

Use the rte_eth_dev_info to fix the number of used rx_descriptors to the maximum possible.
Maybe add a program option to change this default behaviour.

How to multi port Pcap capture?

I can capture pcaps on port 0x01, but I can't capture on port 0x02.

How do I capture pcaps on ports 0x01, 0x02, 0x0...?

The command is
./dpdkcap --proc-type=primary -m 4095 -c 0xf -n 1 -- -p 0x1 -o $output_path/test --no-compression
==> Capture Start OK,
DPDKCAP: Port 0: MAC:xx:xx:xx:xx:xx:xx, Rxdsc/queue=512
DPDKCAP: Core 2 is capturing packet for port 0
DPDKCAP: Waiting for all cores to exit

./dpdkcap --proc-type=secondary -m 4095 -c 0xf -n 1 -- -p 0x2 -o $output_path/test2 --no-compression
==>ERROR:
EAL: Error - exiting with code: 1
Cause: Cannot create mbuf pool

cat /proc/meminfo | grep Huge
AnonHugePages: 29001728 KB
HugePages_Total: 8196
HugePages_Free: 6147
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 KB

How do I capture on multiple ports? With dpdk-pdump I can use the command:
./dpdk-pdump -l 3,4,5 -- --multi --pdump 'port=0,queue=,rx-dev=/tmp/rx-1.pcap' --pdump 'port=1,queue=,rx-dev=/tmp/rx-2.pcap'

I also want the files split by time and file size...

help

Mbuf Allocation failures

Hi ,
When I run dpdkcap, it can't capture packets; the result is in the screenshot below. I don't know what causes the "Mbuf allocation failures", do you know what's wrong?
Thank you very much.
(screenshot)

Indexing data

Hi,
Could we apply bitmap indexing with the LZO compression, for quickly searching packets among many pcap files?

Thank you!

Compatible with vmxnet3?

Hi, I followed the instructions to install DPDK (17.05) and dpdkcap (develop branch), but could not start dpdkcap normally. After I turned on DEBUG in $RTE_TARGET/.config and used --log-level 8, here is the error message.
I saw "Device activation: UNSUCCESSFUL", so I am wondering: does it work on vmxnet3 (ESXi 5.5)? Thanks.
#sudo build/dpdkcap -c 0x07 -n 2 --log-level 8
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 0 on socket 0
EAL: Detected lcore 3 as core 1 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 4 lcore(s)
(skipping unused NIC messages)
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI memory mapped at 0x7f729d600000
EAL: PCI memory mapped at 0x7f729d601000
EAL: PCI memory mapped at 0x7f729d602000
PMD: eth_vmxnet3_dev_init(): >>
PMD: eth_vmxnet3_dev_init(): Hardware version : 1
PMD: eth_vmxnet3_dev_init(): Using device version 1

PMD: eth_vmxnet3_dev_init(): UPT hardware version : 1
PMD: eth_vmxnet3_dev_init(): MAC Address : 00:0c:29:15:d8:1c
DPDKCAP: Using 1 ports to listen on
DPDKCAP: Using 3 cores out of 3 allocated
PMD: vmxnet3_dev_configure(): >>
PMD: vmxnet3_dev_rx_queue_setup(): >>
DPDKCAP: rte_eth_dev_set_rx_queue_stats_mapping(...): Operation not supported
DPDKCAP: The queues statistics mapping failed. The displayed queue statistics are thus unreliable.
DPDKCAP: Port 0: MAC=00:0c:29:15:d8:1c, RXdesc/queue=512
DPDKCAP: Core 2 is capturing packets for port 0
PMD: vmxnet3_recv_pkts(): Rx queue is stopped.
(message repeated 12 more times)
PMD: vmxnet3_dev_start(): >>
PMD: vmxnet3_recv_pkts(): Rx queue is stopped.
(message repeated 8 more times)
PMD: vmxnet3_write_mac(): Writing MAC Address : 00:0c:29:15:d8:1c
PMD: vmxnet3_dev_start(): Device activation: UNSUCCESSFUL
EAL: Error - exiting with code: 1
Cause: Cannot start port 0

README update required

the readme is pretty badly out of date by now.

  • build status
  • build process
  • features/options
  • tuning

Packets received but none written

I'm running a test application and capturing packets with the -S option enabled. The stats say "RX successful packets: 227" and "RX successful bytes: 13.30 KB", but the per-core writing stats say 0 bytes written, and the output file is empty except for the header. What could be causing this?

This was printed before, if it's relevant:

DPDKCAP: rte_eth_dev_set_rx_queue_stats_mapping(...): Function not implemented
DPDKCAP: The queues statistics mapping failed. The displayed queue statistics are thus unreliable.

DPDK version: dpdk-stable-16.07.1
RTE_target: x86_64-native-linuxapp-gcc
Started with ./build/dpdkcap -c 0x0000f0000 --socket-mem 128,4096 -w 0000:83:00.0 -w 0000:83:00.1 -- --no-compression -S -G 10

Statistics not refreshing/calculating on the correct time interval

I noticed that the live/instant statistics were off for me by a consistent amount (10% on a bare-metal machine, 25% on a virtual machine), and I think I've discovered why. When the timer is set up in statistics_ncurses.c, on line 257/258, the "ticks" value passed to rte_timer_reset() assumes a frequency of 2000000ULL (in thousands of ticks), but this is not the correct value for every computer. I recommend using rte_get_timer_hz() from rte_cycles.h to calculate the correct number of ticks:

In statistics_ncurses.c, add:
#include <rte_cycles.h>
in the start_stats_display() function:
uint64_t ticks = rte_get_timer_hz() * STATS_PERIOD_MS / 1000;
rte_timer_reset(&(stats_timer), ticks, PERIODICAL, lcore_id, (void*) printscreen, data);

Segmentation fault when starting dpdkcap on 17.05

When I start dpdkcap in 17.05 it makes a segmentation fault.

./build/app/dpdkcap -c 0x7

EAL: Detected 32 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:09:00.0 on NUMA socket 0
EAL: probe driver: 8086:1533 net_e1000_igb
EAL: PCI device 0000:0a:00.0 on NUMA socket 0
EAL: probe driver: 8086:1533 net_e1000_igb
DPDKCAP: Using 1 ports to listen on
DPDKCAP: Using 3 cores out of 3 allocated
DPDKCAP: rte_eth_dev_set_rx_queue_stats_mapping(...): Operation not supported
DPDKCAP: The queues statistics mapping failed. The displayed queue statistics are thus unreliable.
DPDKCAP: Port 0: MAC=0c:c4:7a:db:86:e8, RXdesc/queue=512
DPDKCAP: Core 2 is capturing packets for port 0
DPDKCAP: Core 1 is writing using file template: output_%COREID.pcap.lzo.
Segmentation fault

It seems to be an error in 17.05, as it works fine with 17.02.

Am I doing something wrong?

refactoring some code (stats, writers)

there are currently two situations where code variants exist that are both "not great":

  • stats modes ansi // ncurses is a compile time option (by editing the makefile in undocumented ways)
  • writer compressed/plain (and the lost pcap/pcapng) is an ifelse fest.

i would like to change both into runtime-options.
after talking with my personal C guru (C is not my native language), the current rough plan is ...

  • start with the stats problem since it is the less complex one.
  • add a struct statsmode that has a name and the handler function pointer(s).
  • add that struct for all compiled-in stats modes (keep ncurses compile-time optional!) to some stats-modes-avail list via eal init hooks.
  • add a rte_log stats mode that does NOT use human readable numbers (because if you are feeding stats to f.ex. grafana, once you hit the TB range for queue bytes, at %.2f that is not really high fidelity stats)
  • (very optional) a direct stats feed to carbon/grafana.

going to ask @Woutifier and @groud for review of actual commits, but feedback/suggestions already welcome by all interested parties.

error: expected ‘;’ before ‘val’

I'm using DPDK 16.04 and Ubuntu 16.04. When I run the make command, the following error appears:

/opt/dpdk-16.04/build/include/rte_pci.h:262:22: error: expected ‘;’ before ‘val’
  (fd) = (typeof (fd))val;                                \
                      ^

Error: No port available.

Hi, I'm trying to run dpdkcap but when I execute this command

sudo ./build/dpdkcap -c 0x0f -n 2 -- -c 1 -w 2

I get the following error:

EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: Error - exiting with code: 1
  Cause: Error: No port available.

Any ideas?
DPDK Version - 17.02.1

Network devices using DPDK-compatible driver

============================================
0000:01:00.0 'I210 Gigabit Network Connection' drv=igb_uio unused=igb

Network devices using kernel driver

===================================
0000:02:00.0 'I210 Gigabit Network Connection' if=enp2s0 drv=igb unused=igb_uio Active
0000:03:00.0 'I210 Gigabit Network Connection' if=enp3s0 drv=igb unused=igb_uio
0000:04:00.0 'QCA986x/988x 802.11ac Wireless Network Adapter' if=wlp4s0 drv=ath10k_pci unused=igb_uio

Cannot create mbuf pool

Hi,
I have used dpdkcap-develop to capture packets, and it worked OK, but now it doesn't work; the error is as below:
[root@localhost dpdkcap-develop]# ./build/dpdkcap -c 0xf -n 2 -- -p 0x1 --statistics
EAL: Detected 4 lcore(s)
EAL: Probing VFIO support...
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
EAL: PCI device 0000:0b:00.0 on NUMA socket -1
EAL: probe driver: 15ad:7b0 net_vmxnet3
DPDKCAP: Using 1 ports to listen on
DPDKCAP: Using 3 cores out of 4 allocated
EAL: Error - exiting with code: 1
Cause: Cannot create mbuf pool

I just added memory to my virtual machine; I don't know whether that caused the error. Do you have any idea?
Thank you very much.

Segmentation fault on rte_eth_dev_start function of 18.11

I got a segmentation fault with DPDK 18.11 using the develop branch. My preliminary debugging (adding RTE_LOG at lines 597 and 601) shows that dpdkcap crashes in the rte_eth_dev_start call in main.
The steps are as follows:

  1. cd /path/of/dpdk-18.11
  2. export RTE_SDK and RTE_TARGET
  3. ./usertools/dpdk-setup.sh
  4. choose [15] x86_64-native-linuxapp-gcc, [18] Insert IGB UIO module, [21] Setup hugepage mappings for non-NUMA systems setting 2048, and [24] Bind Ethernet/Crypto device to IGB UIO module setting 0000:82:00.0, [35]
  5. cd /path/to/dpdkcap
  6. run make, which produces 4 warnings that ‘rte_eth_dev_count’ is deprecated
  7. sudo -E ./build/dpdkcap -c 0xff00 -n 4

EAL: Detected 32 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL: probe driver: 8086:1533 net_e1000_igb
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL: probe driver: 8086:1533 net_e1000_igb
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL: probe driver: 8086:10fb net_ixgbe
DPDKCAP: Using 1 ports to listen on
DPDKCAP: Using 3 cores out of 8 allocated
DPDKCAP: Port 0: MAC=00:1b:21:7e:4d:06, RXdesc/queue=512
DPDKCAP: Core 9 is writing using file template: output_%COREID.pcap.lzo.
DPDKCAP: Line 597 Warning: portId 0, for debug 6666666666666666666666666666666.
Segmentation fault

Any suggestion?

EAL Initializing port 0

Initializing port 0... EAL: Error - exiting with code: 1
Cause: rte_eth_tx_queue_setup:err=-22, port=0, queueid: 0

Cannot init port 0

After following the entire README, when I try to launch the program I get the following:

$ sudo ./build/dpdkcap -c 0x07 -n 8 -- --portmask 0x1

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL: probe driver: 8086:10ec rte_ixgbe_pmd
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL: probe driver: 8086:10ec rte_ixgbe_pmd
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL: probe driver: 8086:10ec rte_ixgbe_pmd
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL: probe driver: 8086:10ec rte_ixgbe_pmd
DPDKCAP: Using 1 ports to listen on
DPDKCAP: Using 3 cores out of 3 allocated
EAL: Error - exiting with code: 1
Cause: Cannot init port 0
DPDKCAP: Core 1 is writing using file template: output_%COREID.pcap.lzo.

Thanks for your help.

rte_eth_tx_queue_setup(...) returned with error code -22

Some months ago I used DPDKCap, but today when I tried to compile and use it again I met this problem:

$ sudo ./build/dpdkcap -c 0x007 -n 2 -- -c 1 -w 1 -p 0x1

EAL: Detected 16 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL: probe driver: 8086:1528 rte_ixgbe_pmd
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL: probe driver: 8086:1528 rte_ixgbe_pmd
EAL: PCI device 0000:83:00.0 on NUMA socket 1
EAL: probe driver: 8086:10c9 rte_igb_pmd
EAL: PCI device 0000:83:00.1 on NUMA socket 1
EAL: probe driver: 8086:10c9 rte_igb_pmd
EAL: PCI device 0000:84:00.0 on NUMA socket 1
EAL: probe driver: 8086:10c9 rte_igb_pmd
EAL: PCI device 0000:84:00.1 on NUMA socket 1
EAL: probe driver: 8086:10c9 rte_igb_pmd
EAL: PCI device 0000:85:00.0 on NUMA socket 1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:85:00.1 on NUMA socket 1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
DPDKCAP: Using 1 ports to listen on
DPDKCAP: Using 3 cores out of 3 allocated
PMD: ixgbe_dev_tx_queue_setup(): tx_rs_thresh must be less than the number of TX descriptors minus 2. (tx_rs_thresh=32 port=0 queue=0)
DPDKCAP: rte_eth_tx_queue_setup(...) returned with error code -22
EAL: Error - exiting with code: 1
Cause: Cannot init port 0

What might be the problem? Thanks

Bypass mode protocol restore

Hello, I want to implement protocol reconstruction based on NIC bypass mode, for example restoring a user's HTTP requests or SMTP mail content. Since this code has no TCP/IP reassembly and reconstruction, I would implement the protocol analysis myself and extract the desired content. However, I need traffic above 10G to be captured without loss. I would appreciate any advice, thank you.

RX packet missed

Hello,

I am having missed packets when I am capturing a flow of packets (1GB/s with 1024 byte each packet). I am using Ubuntu 18.04 and DPDK 16.11.

The PC is writing to a hard disk that has a write speed of 90 Mb/s. The hugepages used are 4 GB with a 2 MB page size. The RX descriptors are set to 4096 (the maximum value). The mbuf count is left at its default value.

What should I do to avoid missed packets?

Thank you

Question - Splitting capture files based on IP/ports

Hi there,

I'm having trouble getting DPDK working in any capacity (I suspect it's a hugetlb problem).

But I'm trying to verify whether this tool is even sufficient for my needs, one being that I can split the capture (pcap?) into different files or even directories to allow for parallel playback/processing.

Thanks

Abnormal when performance test with 6G bandwidth

Hi,
I met abnormal behaviour in a performance test with 6G of bandwidth. I use dpdk-stable-16.11.8 and the develop branch of dpdkcap with the "sudo ./build/dpdkcap -c 0xff00ff00 -n 4 -- -c 1 -w 14 -S -o /ssd/rx_%COREID -m 1048575 -d 4096 --log=./0302.log --no-compression" command line.
For the first 22G of the stream, everything is fine. After that point, the usage of all writing CPUs drops from 100% to 10%, and the number of free mbufs drops dramatically from 1048575 to 9848.
/ssd is an NVMe SSD attached through a PCIe slot. The speed tested by hdparm is as follows.
sudo hdparm -t /dev/nvme0n1
/dev/nvme0n1:
Timing buffered disk reads: 3148 MB in 3.00 seconds = 1048.92 MB/sec

6G-dpdkcap-test.zip

Are there any suggestions?

Unable to compile via make: ETHER_MAX_LEN

error: ‘ETHER_MAX_LEN’ undeclared here (not in a function); did you mean ‘RTE_ETHER_MAX_LEN’

How do I fix this?
Using 18.11.5-0ubuntu0.19.04.1
And target as x86_64-native-linux-gcc

Filter vlan packet failed

Hello,
I set a BPF filter like "tcp", then ran dpdkcap, but after the packet replay I only got the TCP packets without VLAN tags.
Are there any options for VLAN compatibility?
Or do I have to add RTE_ETH_RX_OFFLOAD_VLAN_STRIP and RTE_ETH_RX_OFFLOAD_QINQ_STRIP to remove the VLAN tags?

Thanks.

error in compiling dpdk 17.05

Try dpdk 17.05:

make

CC src/dpdkcap.o
/home/gc/dpdkcap/src/dpdkcap.c: In function ‘main’:
/home/gc/dpdkcap/src/dpdkcap.c:416:3: warning: ‘rte_set_log_type’ is deprecated (declared at /usr/local/share/dpdk/x86_64-native-linuxapp-gcc/include/rte_log.h:169) [-Wdeprecated-declarations]
rte_set_log_type(RTE_LOGTYPE_DPDKCAP, 1);
^
/home/gc/dpdkcap/src/dpdkcap.c:417:3: warning: ‘rte_set_log_level’ is deprecated (declared at /usr/local/share/dpdk/x86_64-native-linuxapp-gcc/include/rte_log.h:144) [-Wdeprecated-declarations]
rte_set_log_level(RTE_LOG_DEBUG);
^
CC src/core_write.o
/home/gc/dpdkcap/src/core_write.c: In function ‘write_core’:
/home/gc/dpdkcap/src/core_write.c:231:9: error: too few arguments to function ‘rte_ring_dequeue_bulk’
DPDKCAP_WRITE_BURST_SIZE);
^
compilation terminated due to -Wfatal-errors.
make[1]: *** [src/core_write.o] Error 1
make: *** [all] Error 2

17.05 apparently added another argument

argument parsing sometimes fails when providing portmask

When I include the -p option, sometimes an error is generated by the parse_opt(int key, char* arg, struct argp_state *state) function in dpdkcap.c. I think this has to do with the global variable errno being a non-zero value before line 81: arguments->portmask = strtoul(arg, &end, 16); and the strtoul function is not required to set errno to 0 on success. So even if the call to strtoul is successful, errno can still be a non-zero value afterwards, causing the conditional on line 82/83 to think there is an error condition. Therefore, I recommend setting errno to 0 right before the call to strtoul.

Recommendation: insert this line before line 81 in dpdkcap.c (on develop branch):
errno = 0; // reset errno because it is not set to 0 by strtoul on success

Works but seeing issue

Hi,
I have 2 servers, A (process a) and B (process b), with DPDK enabled on both sides.
I have udp traffic running (50Kbps or 50Mbps).

What I observe is that when I start capturing using dpdkcap:
dpdkcap -l 40,42,44,46 -n 4 -- -p 0x1 -w /home/cu1/nmurshed/test4

the packets get captured; however, process b on server B stops receiving any packets.
Are the packets from DPDK diverted towards dpdkcap? Won't both processes get the packets?
Process b doesn't get any packets even after I stop dpdkcap.

has anyone faced this ?

Problem with rte_eth_dev_set_rx_queue_stats_mapping

When an Ethernet device does not handle multiple queues, rte_eth_dev_set_rx_queue_stats_mapping usually does not work.
To make the statistics work, rte_eth_stats_get should be used instead.

Maybe it would be interesting to add those stats on the capture side of the statistics, and disable the per-queue ones when not handled.

Read multiple mbuf for a single packet (SEGFAULTs)

When handling huge packets, DPDKCap sometimes segfaults.
I checked this:
http://dpdk.org/doc/guides/prog_guide/mbuf_lib.html#mbuf-library

Within DPDKCap, only the first mbuf is read, via rte_pktmbuf_mtod. The total packet length is then obtained using rte_pktmbuf_pkt_len. As this is used as the copy size, but it might be bigger than the data size of a single mbuf, it provokes SEGFAULTs.

I think a loop should be applied to write all the mbufs of a single packet; each segment's size can be obtained with rte_pktmbuf_data_len().

Possible overflows of packet and byte counters for large amounts of data

In several places in the code, 32-bit long or unsigned long is used to keep track of packet or byte counts. For high-speed networks, it's easy to get more than 2G/4G of bytes or even packets. I'd recommend changing the types of these variables to uint64_t (this type is used in the struct rte_eth_stats provided by DPDK and used by statistics_ncurses.c).

Here are a few of these variables that I noticed (but this may not be all of them):

statistics_ncurses.c, function wwrite_stats():

static long last_total_packets = 0, last_total_bytes = 0, last_total_compressedbytes = 0;
long total_packets = 0, total_bytes = 0, total_compressedbytes = 0;
long instant_packets, instant_bytes, instant_compressedbytes;

statistics_ncurses.c, function wcapture_stats():

static unsigned long * last_per_port_packets = NULL;

core_capture.h, struct core_capture_stats:

unsigned long packets;        /* Packets successfully enqueued */
unsigned long missed_packets; /* Packets core could not enqueue */

core_write.h, struct core_write_config:

unsigned long file_size_limit;

core_write.h, struct core_write_stats:

unsigned long current_file_packets;
unsigned long current_file_bytes;
unsigned long current_file_compressed_bytes;
unsigned long packets;
unsigned long bytes;
unsigned long compressed_bytes;

dpdkcap.c, struct arguments:

unsigned int file_size_limit;

Oh, and if you make this change, don't forget to change the malloc size on statistics_ncurses.c line 35:

malloc(sizeof(unsigned long) * data->cores_capture_stats_list_size)

should be

malloc(sizeof(uint64_t) * data->cores_capture_stats_list_size)

Using only one mbuf_pool

It seems that only a single memory pool is created for all capture cores. I am not sure if it is the best way to do so.

For the moment, the mempool is attached to the main socket using rte_socket_id(); maybe we should use SOCKET_ID_ANY instead?

It is not clear what this means (should we assign a per-lcore mempool?):

Note: the mempool implementation is not preemptable. A lcore must not be interrupted by another task that uses the same mempool (because it uses a ring which is not preemptable).
(http://dpdk.org/doc/api/rte__mempool_8h.html)

Does anyone have an answer? @Woutifier?
