Performance of IPoIB with a 200Gbps adapter

I’m new to the community and this is my first post. Here is my question.

We have 200Gbps IB adapters and a 200Gbps switch.

The IB adapters are installed in two servers, each with a PCIe Gen4 x16 slot and a single AMD CPU (so NUMA is not a concern).
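As a sanity check (assuming standard lspci tooling and that the adapter shows up as a Mellanox device), the negotiated PCIe link can be verified roughly like this; a healthy Gen4 x16 link should report 16GT/s, Width x16:

    # find the adapter's bus address, then check the negotiated link
    lspci | grep -i mellanox
    sudo lspci -vv -s <bus:dev.fn> | grep -i 'LnkSta:'
    # expect something like: LnkSta: Speed 16GT/s, Width x16

(<bus:dev.fn> is a placeholder for the address printed by the first command.)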

We tested the raw bandwidth with ib_send_bw --run_infinitely and observed the traffic with collectl -sX; about 22GB/s is reached.
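For reference, this is roughly how we invoke the perftest tool (the device name mlx5_0 is just an example; adjust to your HCA):

    # on the server
    ib_send_bw -d mlx5_0 --run_infinitely
    # on the client
    ib_send_bw -d mlx5_0 --run_infinitely <server_ip>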

We set up IPoIB in connected mode with ipoib_enhanced set to 0, and tested TCP performance with iperf3. A single TCP connection reaches about 25Gbps; with more parallel connections the aggregate increases to about 80Gbps. We also found that ksoftirqd uses up one full CPU core on the receiving server.
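For completeness, this is roughly how the interface is configured and how the test is run (the modprobe option file, the 65520 MTU for connected mode, and the stream count are examples from our setup; the server address is a placeholder):

    # ipoib_enhanced=0 is set as a module option for ib_ipoib
    echo "options ib_ipoib ipoib_enhanced=0" > /etc/modprobe.d/ib_ipoib.conf

    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520

    # receiver / sender
    iperf3 -s
    iperf3 -c <server_ib0_addr> -P 8 -t 60   # -P sets the number of parallel streams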

Analysis and what we tried:

  1. We used dstat --top-int and found the interrupt mlx5_comp1 firing at about 60k/s.
  2. We checked /sys/class/net/ib0/queues/ and found only one receive queue, rx-0.
  3. Adjusting /sys/class/net/ib0/queues/rx-0/rps_cpus does not seem to help (what we tried is sketched after this list).
  4. We also tried ipoib_enhanced=1 with ib0 in datagram mode; iperf3 then reports packet drops and the total throughput is about 56Gbps.
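This is roughly what we tried for RPS on the single rx queue (the CPU mask and flow-count values below are just examples, not a recommendation):

    # spread receive processing of rx-0 over CPUs 0-7 (mask 0xff)
    echo ff > /sys/class/net/ib0/queues/rx-0/rps_cpus

    # optionally enable flow steering (RFS); values are examples
    echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
    echo 32768 > /sys/class/net/ib0/queues/rx-0/rps_flow_cnt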

So the question is:

What should I tune on the system so that iperf3 gets closer to 200Gbps?

Hi,

Please open a support case at support@mellanox.com for further assistance.

Thanks,

Samer

Why hide the solution behind closed doors?

We’ve got a ConnectX-3 adapter running IPoIB and are hitting the same problem: ksoftirqd/0 is using 99% of a core handling soft interrupts. Is there any way to parallelize that across several cores? I’ve tweaked things according to the tuning guide (power settings, fixed CPU frequency, etc.), but we are still hitting the ceiling with ksoftirqd on the receiving side. What’s the solution, guys? We’re only getting 30Gbps with iperf3, and I know the card can push 50. Thanks!
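In case it helps narrow this down, here is roughly how I check where the receive softirq load lands (nothing adapter-specific, just standard Linux tooling):

    # per-CPU softirq load (%soft column)
    mpstat -P ALL 1

    # which interrupt vectors the mlx5 completion queues land on
    grep mlx5 /proc/interrupts

    # NET_RX distribution across CPUs
    watch -n1 'grep -E "CPU|NET_RX" /proc/softirqs'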