Hello there,
I have been using Mellanox InfiniBand for many years now and have always had great performance.
For a small but not unimportant system I installed two InfiniBand cards, connected them directly (back to back), and set up IPoIB (RDMA is not an option in this case).
Unfortunately, the performance is very low. I know that IPoIB is not the most efficient protocol, but iperf only reaches about 1.2 Gbit/s, which is far below anything I would expect from a 20 Gbit/s link.
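For reference, the test was essentially a plain iperf run over the IPoIB interfaces; the interface name ib0 and the address below are placeholders, the real names may differ:

```
# on system 1 (server side)
iperf -s

# on system 2 (client side), pointing at the IPoIB address of system 1
iperf -c 10.0.0.1 -t 30
```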
Some information about the systems used:
System 1, running Debian:
```
uname -a
Linux space 4.15.18-12-pve #1 SMP PVE 4.15.18-35 (Wed, 13 Mar 2019 08:24:42 +0100) x86_64 GNU/Linux

ibstat
CA 'mlx4_0'
    CA type: MT4099
    Number of ports: 1
    Firmware version: 2.35.5100
    Hardware version: 1
    Node GUID: 0x7cfe900300b1c470
    System image GUID: 0x7cfe900300b1c473
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 20
        Base lid: 2
        LMC: 0
        SM lid: 1
        Capability mask: 0x02514868
        Port GUID: 0x7cfe900300b1c471
        Link layer: InfiniBand

lspci | grep Mellanox
83:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]
```
System 2, running Arch:
```
uname -a
Linux desktop 5.0.7-arch1-1-ARCH #1 SMP PREEMPT Mon Apr 8 10:37:08 UTC 2019 x86_64 GNU/Linux

ibstat
CA 'mthca0'
    CA type: MT25204
    Number of ports: 1
    Firmware version: 1.2.0
    Hardware version: a0
    Node GUID: 0x0002c9020020d7e0
    System image GUID: 0x0002c9020020d7e3
    Port 1:
        State: Active
        Physical state: LinkUp
        Rate: 20
        Base lid: 1
        LMC: 0
        SM lid: 1
        Capability mask: 0x02590a6a
        Port GUID: 0x0002c9020020d7e1
        Link layer: InfiniBand

lspci | grep Mellanox
02:00.0 InfiniBand: Mellanox Technologies MT25204 [InfiniHost III Lx HCA] (rev 20)
```
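For completeness, the IPoIB transport mode and MTU can also affect throughput; this is roughly how they can be checked and changed (ib0 is assumed to be the IPoIB interface name; datagram mode typically caps the MTU around 2044 bytes, connected mode allows up to 65520):

```
# show the current IPoIB mode (datagram or connected)
cat /sys/class/net/ib0/mode

# show the current MTU
ip link show ib0

# switch to connected mode and raise the MTU (as root)
echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
```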
I know that system 2 is using a rather old InfiniBand card, but is this performance really all I can expect?