I’ve had no issues with CentOS 7 and CentOS 6 using HP ConnectX-2 cards and older InfiniHost III cards.
I did have issues trying to get some Isilon InfiniHost III cards working and eventually gave up: the firmware was unreadable and unflashable, and the drivers never worked with them.
I’m using the stock RHEL / CentOS drivers with CentOS 6 & 7; I first ran NFS over IB (NFSoIB) and then moved to NFSoRDMA. Don’t forget to punch extra holes in the firewall, or allow all traffic for the IB interface in the firewall rules. If your link is pingable, you’re 95% of the way there - any remaining problems getting NFS over IB working are usually down to firewall rules. Getting NFSoRDMA going then just requires the extra tweaks mentioned on the web page above.
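As a rough sketch of those two steps on CentOS 7 (the interface name ib0, the export path, and the mount point are hypothetical - adjust for your setup; on CentOS 6 you'd do the firewall part with iptables instead; port 20049 is the standard NFS/RDMA port):

```shell
# Trust the IB interface so NFS traffic isn't filtered (ib0 is hypothetical)
firewall-cmd --permanent --zone=trusted --change-interface=ib0
firewall-cmd --reload

# Server side: load the server RDMA transport and have nfsd listen on 20049
modprobe svcrdma
echo "rdma 20049" > /proc/fs/nfsd/portlist

# Client side: load the client RDMA transport and mount with the rdma option
modprobe xprtrdma
mount -o rdma,port=20049 server:/export /mnt/nfstest
```

Plain NFSoIB needs only the firewall part plus a normal mount over the IB interface's IP; the modprobe/portlist/mount tweaks are the NFSoRDMA extras.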
I had some fun doing tests with the older hardware I have. The PCIe bus gives out at 25Gb/s, and my SSD file server gives out well before then - but I needed 10Gb/s ethernet, or better, to really exploit the speed of my SSD file servers. These results were measured with fio, so they’re a test of the link speed, not of the file systems involved.
| Link | Data in 30 s | Aggregate bandwidth (MB/s, Gb/s) | Bandwidth (MB/s, Gb/s) | Latency (ms) | IOPS |
|------|--------------|----------------------------------|------------------------|--------------|--------|
| QDR IB 40Gb/s, NFS over RDMA | 94 GB | 3,100, 25 | 802, 6.4 | 0.615 | 12,535 |
| DDR IB 20Gb/s, NFS over RDMA | 24.4 GB | 834, 6.7 | 208, 1.7 | 2.4 | 3256 |
| SDR IB 10Gb/s, NFS over RDMA | 22.3 GB | 762, 6.1 | 190, 1.5 | 2.57 | 2978 |
| QDR IB 40Gb/s | 16.7 GB | 568, 4.5 | 142, 1.1 | 3.4 | 2218 |
| DDR IB 20Gb/s | 13.9 GB | 473, 3.8 | 118, 0.94 | 4.1 | 1845 |
| SDR IB 10Gb/s | 13.8 GB | 470, 3.8 | 117, 0.94 | 4.2 | 1840 |
| 10Gb/s ethernet | 5.9 GB | 202, 1.6 | 51, 0.41 | 9.7 | 793 |
| 1Gb/s ethernet | 3.2 GB | 112, 0.90 | 28 | 17.8 | 438 |
| 100Mb/s ethernet | 346 MB | 11.5 | 2.9 | 174 | 45 |
| 10Mb/s ethernet via switch | 36 MB | 1.2 | 279 kB/s | 1797 | 4 |
| 10Mb/s ethernet via hub | 33 MB | 1.0 | 260 kB/s | 1920 | 4 |
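The exact fio invocation isn't shown above, but a job file along these lines - sequential reads against the NFS mount, run for a fixed 30 seconds - is the general shape of such a test. The block size, job count, file size, and directory here are assumptions for illustration, not the exact values used:

```ini
; sketch of a fio link-speed test over an NFS mount
; (bs, numjobs, size and directory are assumed, not the original values)
[global]
directory=/mnt/nfstest   ; hypothetical NFS mount point
rw=read                  ; sequential reads - tests the link, not the disks
bs=64k
size=1g
runtime=30
time_based
group_reporting

[seqread]
numjobs=4
```

fio's summary then reports the aggregate bandwidth, per-job bandwidth, completion latency, and IOPS figures of the kind tabulated above.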