Sniff RoCE traffic using tcpdump

I’m trying to sniff RoCE traffic using tcpdump with our ConnectX-5 adapter.

However, running the following command fails with this error:

mariano@nslrack02:~$ sudo tcpdump -i mlx5_0

tcpdump: mlx5_0: No such device exists

(SIOCGIFHWADDR: No such device)

I read in the docs that I must install libpcap >= 1.9, tcpdump >= 4.9.3, and OFED >= 5.1.

This is my environment:

mariano@nslrack02:~$ tcpdump --version

tcpdump version 4.9.3

libpcap version 1.10.0 (with TPACKET_V3)

OpenSSL 1.1.1 11 Sep 2018

mariano@nslrack02:~$ ofed_info

MLNX_OFED_LINUX-5.3-1.0.0.1 (OFED-5.3-1.0.0):

OS is Ubuntu 18.04 with Linux kernel: 5.4.0-74.

So everything seems to be at the right version, but it does not work. What am I missing? Thanks

Hi Mariano,

We removed support for the Offloaded Traffic Sniffer feature, but we can suggest a Docker-container solution that accesses the RDMA devices, so you can capture and analyze RDMA packets with tcpdump.

Requirements:

  • CentOS/RHEL 7.x / 8.x

  • Upstream kernel 4.9 or newer

  • ConnectX-3/4/5
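The kernel requirement can be verified with a quick check (a sketch; the parsing assumes the usual 'uname -r' format such as 5.4.0-74-generic):

```shell
# Quick check of the kernel requirement above: must be 4.9 or newer.
KVER=$(uname -r | cut -d. -f1-2)   # e.g. "5.4" from "5.4.0-74-generic"
MAJOR=${KVER%%.*}
MINOR=${KVER#*.}
if [ "$MAJOR" -gt 4 ] || { [ "$MAJOR" -eq 4 ] && [ "$MINOR" -ge 9 ]; }; then
    echo "kernel $KVER is new enough for RDMA sniffing"
else
    echo "kernel $KVER is too old (need >= 4.9)"
fi
```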

Installation instructions:

  1. Install an OS that is compatible with kernel 4.9 and above

  2. Install an upstream kernel; versions 4.9 and later support sniffing RDMA (RoCE) traffic

  3. yum install docker

  4. service docker start

  5. docker pull mellanox/tcpdump-rdma

  6. docker run -it -v /dev/infiniband:/dev/infiniband -v /tmp/traces:/tmp/traces --net=host --privileged mellanox/tcpdump-rdma bash

  7. Install MFT 4.9

  8. Install perftest package from MLNX_OFED RPMS directory

  9. Capture RoCE packets with the following:

tcpdump -i mlx5_0 -s 0 -w /tmp/traces/capture1.pcap

or

tcpdump -i mlx4_0 -s 0 -w /tmp/traces/capture1.pcap

  10. Run the ib_write_bw test, as below:

Server : ib_write_bw -d mlx5_0 -a -F

Client: ib_write_bw -a -F <Server_ip>

  11. Open the pcap in Wireshark to verify the capture
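The capture steps above can be collapsed into one session, roughly like this (a sketch: the image name and the mlx5_0 device come from the instructions above, and invoking tcpdump directly instead of an interactive bash shell is an assumption; the capture itself needs Docker, root, and a ConnectX NIC, so the snippet skips it when those are missing):

```shell
# Sketch of the container-based capture flow from the steps above.
# The image name (mellanox/tcpdump-rdma) and device (mlx5_0) are taken
# from the instructions; adjust both to your setup.
DEV=mlx5_0
TRACE_DIR=/tmp/traces
mkdir -p "$TRACE_DIR"

# The capture needs Docker and RDMA hardware, so skip it when
# /dev/infiniband is not present on this host.
if command -v docker >/dev/null && [ -e /dev/infiniband ]; then
    docker run --rm -v /dev/infiniband:/dev/infiniband \
        -v "$TRACE_DIR":"$TRACE_DIR" --net=host --privileged \
        mellanox/tcpdump-rdma \
        tcpdump -i "$DEV" -s 0 -w "$TRACE_DIR/capture1.pcap" \
        || echo "capture failed; check device name and permissions"
else
    echo "docker or /dev/infiniband unavailable; skipping capture"
fi
```

With the capture running, generate traffic with ib_write_bw from another terminal, then open /tmp/traces/capture1.pcap in Wireshark.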

Thanks,

Samer

Hi Samer,

Thanks for the answer. As far as I understand, this is a tcpdump limitation. However, I also want to sniff RoCE traffic with libpcap in a C application.

Do I still need to use the Mellanox Docker container?

Is there any chance to install the same libraries or packages that are inside the container into my host?

Thanks.

Mariano,

I too have been running into the same error as you. While Samer’s statement about removing the Offloaded Traffic Sniffer holds true (meaning you can no longer enable or disable the sniffer through ethtool), RDMA sniffer support was introduced in libpcap 1.9.0, so it shouldn’t rely on that feature. My question to you is: are you using the Ubuntu distribution’s repo tcpdump package, or have you installed it from somewhere else manually?

In my scenario, I am using a similar config to you, except I am using Ubuntu 20.04 and the following tcpdump --version:

tcpdump version 4.9.3

libpcap version 1.9.1 (with TPACKET_V3)

OpenSSL 1.1.1f 31 Mar 2020

I get the same output:

user@remote2:/mnt/nfsrdma$ sudo tcpdump -i mlx5_1

tcpdump: mlx5_1: No such device exists

(SIOCGIFHWADDR: No such device)
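A quick way to tell whether a given tcpdump/libpcap pair can even see the RDMA devices is to list the interfaces libpcap enumerates; a libpcap built with RDMA support lists the mlx5_* devices alongside the Ethernet ones (tcpdump -D is a standard flag; the guard is only so the snippet degrades gracefully where tcpdump is absent):

```shell
# tcpdump -D lists every interface libpcap can enumerate. If the libpcap
# in use was built with RDMA support, devices such as mlx5_0/mlx5_1 show
# up here; with a build lacking it they are simply absent.
if command -v tcpdump >/dev/null; then
    IFLIST=$(tcpdump -D 2>/dev/null || echo "(tcpdump -D failed)")
else
    IFLIST="(tcpdump not installed)"
fi
echo "$IFLIST"
```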

What is making me scratch my head is when I download the exact same versions in source code form from the tcpdump.org site (tcpdump 4.9.3 and libpcap 1.9.1), compile them, then ‘make install’ them, it works just fine:

user@remote2:/mnt/nfsrdma$ sudo tcpdump -i mlx5_1

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

listening on mlx5_1, link-type EN10MB (Ethernet), capture size 262144 bytes

Note: if you don’t ‘apt remove tcpdump libpcap0.8’, your PATH may still point to the distro install. The compiled install path is /usr/local/sbin/.
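For reference, the from-source build described above can be scripted roughly like this (a sketch: the version numbers, download site, and /usr/local/sbin path come from the post, but note that libpcap’s configure only enables RDMA capture if the rdma-core/libibverbs development headers are installed first — that is an assumption about why the distro build differs):

```shell
# Sketch of the from-source build described above (libpcap 1.9.1 first,
# so tcpdump 4.9.3 links against the RDMA-capable pcap). Wrapped in a
# function so sourcing this file doesn't immediately download or build.
build_rdma_tcpdump() {
    set -e
    for pkg in libpcap-1.9.1 tcpdump-4.9.3; do
        wget -q "https://www.tcpdump.org/release/$pkg.tar.gz"
        tar xzf "$pkg.tar.gz"
        (cd "$pkg" && ./configure && make && sudo make install)
    done
    hash -r            # forget the shell's cached path to the old binary
    command -v tcpdump # should now resolve to /usr/local/sbin/tcpdump
}
echo "run 'build_rdma_tcpdump' to build libpcap 1.9.1 + tcpdump 4.9.3 from source"
```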

The only thing I can think of is that the OFED install provided the Mellanox-specific hardware support that was pulled into the compile process, which the distro package does not natively include. I would prefer to give my customers a recommendation that doesn’t involve compiling code, so if anyone finds an easier package-manager alternative and shares it, I would appreciate it!

Dear James,

I also ran into this problem; thank you so much for sharing your experience.