Performance issue with inbox drivers

While evaluating the Mellanox ConnectX-5 network cards, we’ve encountered some network bandwidth issues when using the inbox drivers.

To measure network performance, we equipped two machines with ConnectX-5 cards and connected them with a QSFP28 100G copper cable. The client machine has an Intel Core i9-9900 CPU, and the server an Intel Xeon Silver 4208.

The throughput obtained when running 3 instances of iperf3 in parallel is as follows:

  • ~60Gbit/s with version 5.13 of the Linux kernel, as shipped with Ubuntu 21.10

  • ~80Gbit/s with version 5.16.10 of the Linux kernel

  • ~95-100Gbit/s with the drivers that come with the Mellanox OFED package
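For reference, a run of this benchmark can be sketched as follows; the port numbers, core pinning, and the server address 192.168.100.1 are illustrative, not our exact configuration:

```shell
#!/bin/sh
# Server side: one iperf3 listener per port, each pinned to its own core
# (iperf3 is single-threaded, hence the three separate instances).
for port in 5201 5202 5203; do
    taskset -c "$((port - 5201))" iperf3 -s -p "$port" &
done

# Client side: one sender per port against the server's address
# (192.168.100.1 is an assumed example address).
for port in 5201 5202 5203; do
    taskset -c "$((port - 5201))" iperf3 -c 192.168.100.1 -p "$port" -t 30 &
done
wait
```

The bandwidths above are the sum reported by the three instances.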

We followed the recommendations found in the performance tuning guide, but they did not result in any improvement with either inbox driver: iperf reports the same results.
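To give an idea of the kind of changes involved, the guide's recommendations amount to steps like the following; the interface name enp59s0f0 and the exact values are illustrative examples, not our precise settings:

```shell
# Enlarge the NIC ring buffers (interface name is an example).
ethtool -G enp59s0f0 rx 8192 tx 8192

# Let the driver adapt interrupt moderation to the load.
ethtool -C enp59s0f0 adaptive-rx on adaptive-tx on

# Raise socket buffer limits so TCP windows can grow at 100G speeds.
sysctl -w net.core.rmem_max=268435456
sysctl -w net.core.wmem_max=268435456
sysctl -w net.ipv4.tcp_rmem="4096 87380 268435456"
sysctl -w net.ipv4.tcp_wmem="4096 65536 268435456"
```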

We have multiple questions:

  • Is there some configuration we missed that could explain this performance gap?

  • What exactly are the differences between the drivers found in the upstream Linux kernel and the ones in the Mellanox OFED packages?

  • What are the plans for the upstream driver?

Hello Maxime,

Thank you for posting your inquiry on the NVIDIA Networking Community.

For performance recommendations when running inbox drivers, support and recommendations are provided by the Linux distribution vendor.

When we run TCP benchmark tests, we only use iperf (iperf2), as iperf3 lacks several features such as multithreading, multicast, and bidirectional tests. See the following link on how to run iperf successfully →
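A minimal iperf (iperf2) run along those lines could look like this; the server address and stream count are illustrative:

```shell
# Server side: a single iperf2 process (multithreaded by design).
iperf -s

# Client side: 8 parallel streams (-P starts one thread per stream),
# running for 30 seconds against the example address.
iperf -c 192.168.100.1 -P 8 -t 30

# Add -d for a simultaneous bidirectional (dual) test.
iperf -c 192.168.100.1 -P 8 -t 30 -d
```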

The article you are referring to is the correct one to use when you want to optimize the tuning further.

  • With MLNX_OFED, we provide the latest drivers and firmware available, including all the userspace utilities. When using the inbox or upstream driver, you need to manually upgrade the adapter firmware to the version supported by the upstream kernel.
  • Inbox drivers are maintained by the distribution vendor.
  • The upstream kernel contains as much as possible of the latest driver code we provide with MLNX_OFED, and we will continue to contribute that code for the foreseeable future.
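As an illustration, with the inbox or upstream driver the adapter firmware can be inspected and updated using the mstflint tool; the PCI address 03:00.0 and the image file name below are examples:

```shell
# Locate the ConnectX-5 adapter's PCI address.
lspci | grep -i mellanox

# Query the firmware version currently running on the adapter.
mstflint -d 03:00.0 query

# Burn a newer firmware image downloaded for this adapter model.
mstflint -d 03:00.0 -i fw-ConnectX5.bin burn
```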

Thank you and regards,

~NVIDIA Networking Technical Support