A low-throughput environment with a ConnectX-3 40G VPI Ethernet card

Server: Windows Server 2019, Dell R7x0, Mellanox ConnectX-3 40Gb
Client: i7 13xxx CPU, Intel X520 10Gb, interrupt moderation disabled
Test with iperf:
64K TCP window: 400 Mb/s
1M TCP window: 10 Gb/s
That seems normal, but compared with another server with the same network card, the iperf result with a 64K window can reach 10 Gb/s. What is probably wrong?
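For reference, with a fixed TCP window the achievable single-stream throughput is roughly window size divided by round-trip time, so even a small RTT difference between the two paths shows up heavily at a 64K window. A back-of-envelope sketch (the RTT values are illustrative assumptions, not measurements from this setup):

```python
# Back-of-envelope: single-stream TCP throughput with a fixed window is capped
# by window / RTT. The RTT values below are illustrative assumptions only.

def window_limited_throughput_gbps(window_bytes: int, rtt_seconds: float) -> float:
    """Maximum single-stream TCP throughput (Gbit/s) for a given window and RTT."""
    return window_bytes * 8 / rtt_seconds / 1e9

for rtt_us in (50, 200, 1300):  # hypothetical round-trip times in microseconds
    rtt = rtt_us / 1e6
    print(f"RTT {rtt_us:>5} us: "
          f"64K window -> {window_limited_throughput_gbps(64 * 1024, rtt):5.2f} Gb/s, "
          f"1M window -> {window_limited_throughput_gbps(1024 * 1024, rtt):5.2f} Gb/s")
```

At roughly 1.3 ms of round-trip latency a 64 KB window caps out near 400 Mb/s, so sources of added latency (interrupt moderation, buffering, driver settings) are worth ruling out on the slow path.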

I would check that both servers are aligned on the same WinOF driver and firmware versions.

(Mellanox OFED for Windows - WinOF / WinOF-2 → WinOF download)

Note that ConnectX-3 has reached end of life and end of service.

Are the OS versions the same between the two servers? Are they patched with the same KBs?

For comparison purposes, I would suggest validating that both servers are properly tuned.

We have community articles that address that topic (performance tuning).

Some pointers:

BIOS settings set to the maximum performance profile.

Disable power saving.

MTU setting on the hosts: default 1500 or jumbo 9000.

MTU setting on the switch port(s): default or 9000 (a quick end-to-end check is sketched after this list).

Cables used and their lengths.

Device Manager → Network Adapters → Mellanox HCA Properties → Advanced → RX/TX buffer settings.

RSS settings: use cores from the NUMA node closest to the adapter.

Avoid using core 0, which is utilized for OS tasks.

For throughput measurement and comparison, the preferred performance-testing tool is NTttcp.

iperf can also be used; make sure both sides run the same iperf version and that parallel streams are supported and used (a small wrapper sketch follows below).
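To keep the iperf comparison apples-to-apples, a minimal sketch that drives identical iperf3 runs against both servers, pinning the window size, stream count, and duration. It assumes iperf3 is on the PATH of the Windows 10 client; the host names and parameters are placeholders:

```python
# Minimal sketch: run identical iperf3 tests against both servers so the only
# variable is the server under test. Assumes iperf3 is on PATH; host names and
# parameters below are placeholders, adjust to your environment.
import json
import subprocess

SERVERS = ["server-under-test", "known-good-server"]  # hypothetical host names
WINDOW = "64K"    # TCP window under comparison (also try 1M)
STREAMS = 4       # -P parallel streams; single-stream results hide RSS/NUMA effects
DURATION = 30     # seconds per run

for host in SERVERS:
    cmd = ["iperf3", "-c", host, "-w", WINDOW, "-P", str(STREAMS),
           "-t", str(DURATION), "--json"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    gbps = json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9
    print(f"{host}: {STREAMS} streams, {WINDOW} window -> {gbps:.2f} Gb/s")
```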
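For the MTU items above, a quick way to confirm that the chosen frame size actually passes end to end (host NICs and every switch port in the path) is a don't-fragment ping at the matching payload size. A minimal sketch using the standard Windows ping flags; the target host name is a placeholder:

```python
# Quick end-to-end MTU check from the Windows client: send don't-fragment pings
# at payload sizes that fit a 1500-byte and a 9000-byte MTU (28 bytes of IP/ICMP
# headers are added on top). The target host name is a placeholder.
import subprocess

TARGET = "server-under-test"  # hypothetical host name

for mtu, payload in ((1500, 1472), (9000, 8972)):
    # Windows ping: -f = don't fragment, -l = payload size, -n = count
    proc = subprocess.run(["ping", "-f", "-l", str(payload), "-n", "2", TARGET],
                          capture_output=True, text=True)
    ok = "TTL=" in proc.stdout
    print(f"MTU {mtu}: payload {payload} bytes -> {'passes' if ok else 'blocked/fragmented'}")
```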

For additional information, consult the performance section of our WinOF User Manual.

Thanks, we'll give it a try. The test client runs Windows 10 with an Intel X520 NIC; the server runs Windows Server 2019.