IPoIB performance issue!!

Hi All,

I’m new to InfiniBand and I want to benchmark Ethernet against IPoIB. When I run the measurement I see only about an 8x improvement compared to Ethernet.

| Test | Receive socket size | Send socket size | Send message size | Throughput (Mbps) Eth | Throughput (Mbps) IB (IPoIB) |
|---|---|---|---|---|---|
| TCP, IPoIB (connected) | 64K | 64K | 4K | 856 | 6751 |
| UDP, IPoIB (datagram) | 64K | 64K | 4K | 943 | 6353 |

IB config: 56 Gb/s (4x FDR)

Ethernet: 1 Gbps

Benchmark tool: Netperf
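
For reference, a netperf run that roughly matches the socket and message sizes in the table above would look like the lines below (a sketch; <server-ipoib-ip> is a placeholder, -s/-S set the local/remote socket buffers and -m the send message size):

# TCP over IPoIB, 64K socket buffers, 4K messages
netperf -H <server-ipoib-ip> -t TCP_STREAM -- -s 65536 -S 65536 -m 4096

# UDP over IPoIB, same sizes
netperf -H <server-ipoib-ip> -t UDP_STREAM -- -s 65536 -S 65536 -m 4096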

Thanks

Performance measurement… a world within a world!

Having dealt with Mellanox InfiniBand for the last decade and more, I know for a fact that it can do more; it is just a matter of the use case.

Debugging performance complaints can go in many directions and can be either simple or very involved.

The community site has a few examples of other users who have already worked through performance tuning issues; please search.

Given that IB is a different ballgame than 1G Ethernet, a few things to consider:

  • Server: driving 56 Gb/s doesn’t come from nowhere; a strong server with a capable PCIe bus is key (see the quick checks after this list). See this post: Infiniband Performance Analysis?

  • Network: you didn’t mention which switch you are using, its type, how many there are, or whether the network is blocking.

  • Versions of the OFED stack: with IB you can run traffic over several protocols, including native RDMA (best performance), IPoIB (which is what you are doing here), and a few others. You get the best IPoIB performance with the latest Mellanox OFED, version 2.1, available on Mellanox.com (see the mode and rate checks after this list).
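
A couple of quick checks for the server and IPoIB points above (a sketch; the PCI address 03:00.0 and the device name ib0 are placeholders for your HCA and IPoIB interface):

# PCIe link of the HCA: 4x FDR needs a PCIe 3.0 x8 slot to avoid a bus bottleneck
lspci -s 03:00.0 -vv | grep -i 'LnkCap\|LnkSta'

# IB port state and rate: should show Active and a 56 Gb/s rate for 4x FDR
ibstat

# IPoIB mode (connected vs. datagram) and MTU of the interface
cat /sys/class/net/ib0/mode
cat /sys/class/net/ib0/mtu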

And finally, check out the following Performance Tuning Guide: http://www.mellanox.com/related-docs/prod_software/Performance_Tuning_Guide_for_Mellanox_Network_Adapters.pdf with lots of good advice for improving your performance.

Good luck. Please keep us posted on how things look.

Thanks a lot “yairi” and “ingvar_j” for your responses. Finally I was able to achieve ~20 Gb/s on FDR.

with

OFED : MLNX_OFED_LINUX-2.0-3.0.0 (OFED-2.0-3.0.0)

Firmware: 2.30.3000

InfiniBand card type: 4x FDR, Rate: 56 Gb/s

CMD: netperf -H 192.168.180.101 -t TCP_STREAM
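
In case it helps anyone reproduce this, the version and rate details above can be read off the host like this (a sketch; mlx4_0 is a placeholder for the HCA name):

# MLNX_OFED version
ofed_info -s

# Firmware version, port rate and link state
ibstat mlx4_0
# or: ibv_devinfo -d mlx4_0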

Thanks Ingvar. 20Gb/s is a much better number.

I just did a quick test, hosts connected over QDR fabric.

If you give netperf the -c and -C switches, you can see the CPU usage of the hosts as well.

[user@host01 ~]# netperf -H 10.10.100.21 -l 10 -t TCP_STREAM -c -C -- -m 65536

I got a throughput of 12.9 Gbit/s using 5.5% CPU.

Testing with iperf gave me up to 20 Gb/s on the same hosts.
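
The iperf run was along these lines (a sketch, not the exact command; the server side just runs "iperf -s", and -P adds parallel streams, which is usually what pushes IPoIB past a single TCP stream):

# on the other host: iperf -s
iperf -c 10.10.100.21 -t 10
# several parallel streams
iperf -c 10.10.100.21 -t 10 -P 4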

Regards, Ingvar