Slow iperf speed (~4 Gb/s) between two Mellanox ConnectX-3 VPI cards with a 40 Gb/s link

Hello,

I have two Mellanox ConnectX-3 VPI cards. Both have been updated to the latest firmware; one is installed in a CentOS 7 machine and the other in a Windows 10 machine.

Both cards are in AMD Threadripper systems, in PCI Express Gen 3/4 slots at x8 or x16.

On windows I use the latest WinOF drivers.

On Linux I use the inbox drivers, because I can't compile the OFED drivers for the 5.10 kernel.

The link between the two cards negotiates at 40 Gb/s; I have tried both InfiniBand and Ethernet mode.

The problem is that even though the link is rated at 40 Gb/s, the real throughput is about 4 Gb/s.

What could be the problem?

Thank you

I had similar issues, which are partly resolved, but I can't seem to get it reliable with Windows in the mix.

- Make sure both sides are running iperf2 (not iperf3).

- Run with multiple threads (my setup maxes out at ~16-19 Gb/s on a single thread, though).

- Increase the TCP window size to 128M.

- Enable jumbo frames on both sides (e.g. MTU 9000).

- If you have a switch in between, its MTU needs to be set at least as high as the setting on the nodes (e.g. 9014).
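The list above translates into roughly the following commands. This is a sketch, not a definitive recipe: it assumes iperf2 is installed on both ends and that the Linux interface is named `ens1` (substitute your own interface name and addresses).

```shell
# Linux side: enable jumbo frames, then start an iperf2 server.
ip link set dev ens1 mtu 9000   # requires root; "ens1" is an assumed name
iperf -s -w 128M                # iperf2 server with a 128 MB window

# Windows side: set the adapter's "Jumbo Packet" advanced property to 9014
# in Device Manager, then run the iperf2 client with parallel streams:
#   iperf.exe -c <server-ip> -w 128M -P 8 -t 30
```

A single TCP stream often cannot fill a 40 Gb/s link, which is why the `-P` (parallel streams) option usually matters more than any other flag here.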

In addition to Ruben’s suggestions:

    • Most important, and as a preliminary step: apply the "performance fine-tuning" on both the Windows and Linux OS. That includes registry, RSS, hardware and firmware tunings, to make sure the adapters use the CPU NUMA nodes/cores efficiently.
    • Guidance on this can be found in the relevant user manuals:

https://www.mellanox.com/products/adapter-software/ethernet/windows/winof-2

https://www.mellanox.com/products/infiniband-drivers/linux/mlnx_ofed
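On the Linux side, one concrete piece of that NUMA tuning is checking which node the adapter is attached to and keeping the benchmark on the same node. A minimal sketch, assuming the interface is named `ens1` and that `numactl` is installed:

```shell
# Which NUMA node is the ConnectX-3 attached to? ("ens1" is an assumed name;
# -1 means the platform does not report a node)
cat /sys/class/net/ens1/device/numa_node

# Run the benchmark pinned to that node's CPUs and memory (node 0 here):
numactl --cpunodebind=0 --membind=0 iperf -s -w 128M
```

On Threadripper this matters because a benchmark scheduled on a die far from the PCIe slot pays a cross-node latency penalty on every packet.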

    • "iperf" is more of a Linux-oriented test tool; on Windows, "NTttcp" is the equivalent Windows-oriented test tool.

I know they have "ntttcp" for Linux as well, so I would suggest using the NTttcp test tool between the two platforms:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-bandwidth-testing
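For reference, the linked Microsoft guide invokes NTttcp roughly as below. This is a sketch only: thread counts, IP addresses, and the exact flags should be checked against that guide and the tool's own help output for your versions.

```shell
# Windows receiver (ntttcp.exe from Microsoft):
#   ntttcp.exe -r -m 8,*,<windows-ip> -t 60
#
# Linux sender (the microsoft/ntttcp-for-linux port):
#   ntttcp -s -m 8,*,<windows-ip> -t 60
```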

    • And finally: since you use a mixture of the inbox driver (not MLNX_OFED) on Linux with the WinOF-2 driver on Windows, chances are that you won't get optimum performance.