Red Hat 6.7 + NFS over RDMA

Hi Team,

We are facing an issue while configuring NFS over RDMA.

We followed the complete steps as per the link below:

https://www.kernel.org/doc/Documentation/filesystems/nfs/nfs-rdma.txt
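For reference, the procedure in that document boils down to roughly the following; the export path and mount point here are placeholders, not the actual values used:

# Server side (per the kernel nfs-rdma document)
modprobe svcrdma                          # load the server-side NFS/RDMA transport
service nfs start                         # start the NFS server (RHEL 6 init script)
echo rdma 20049 > /proc/fs/nfsd/portlist  # register an RDMA listener on port 20049

# Client side
modprobe xprtrdma                         # load the client-side NFS/RDMA transport
mount -o rdma,port=20049 <server>:/<export> /mnt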

Result:

  • Able to mount the exported volume over RDMA on the destination client.
  • dmesg error: "mlx4_0, memreg 5 slots 32"

Action taken:

Updated the firmware of the HCA card.

Need your guidance to resolve the issue.

Thank You

Atul Yadav

Hi Team,

We configured the setup as per the link below and are able to mount the NFS mount points over RDMA.

HowTo Configure NFS over RDMA (RoCE) https://community.mellanox.com/s/article/howto-configure-nfs-over-rdma--roce-x

However, we are constantly getting the following errors in the log file:

kernel: rpcrdma: connection to 172.16.2.240:20049 closed (-103)

kernel: rpcrdma: connection to 172.16.2.240:20049 on mlx4_0, memreg 5 slots 32 ird 16

kernel: rpcrdma: connection to 172.16.2.240:20049 closed (-103)

kernel: rpcrdma: connection to 172.16.2.240:20049 on mlx4_0, memreg 5 slots 32 ird 16
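For reference, the -103 in these messages is a negative errno; on Linux, 103 is ECONNABORTED, which can be confirmed with something like:

grep ECONNABORTED /usr/include/asm-generic/errno.h
# expected output (roughly): #define ECONNABORTED 103 /* Software caused connection abort */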

Please advise how to proceed further.

Thank You

Atul Yadav

Yes, we completed the steps as per that document as well, but there is no change in behavior.

Client

Red Hat Enterprise Linux Server release 6.7 (Santiago)

2.6.32-573.el6.x86_64

---- Performing Adapter Device Self Test ----

Number of CAs Detected … 1

PCI Device Check … PASS

Kernel Arch … x86_64

Host Driver Version … MLNX_OFED_LINUX-3.3-1.0.0.0 (OFED-3.3-1.0.0): 2.6.32-573.el6.x86_64

Host Driver RPM Check … PASS

Firmware on CA #0 VPI … v2.35.5000

Firmware Check on CA #0 (VPI) … NA

REASON: NO required fw version

Host Driver Initialization … PASS

Number of CA Ports Active … 1

Port State of Port #1 on CA #0 (VPI)… UP 4X FDR (InfiniBand)

Error Counter Check on CA #0 (VPI)… PASS

Kernel Syslog Check … PASS

Node GUID on CA #0 (VPI) … 7c:fe:90:03:00:17:21:80

172.16.2.240:/storage on /test type nfs (rw,rdma,port=20049,addr=172.16.2.240)
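That entry corresponds to a client-side mount along these lines (a sketch of the equivalent commands):

modprobe xprtrdma                                      # client-side NFS/RDMA transport
mount -o rdma,port=20049 172.16.2.240:/storage /test   # mount the export over RDMA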

Error

rpcrdma: connection to 172.16.2.240:20049 on mlx4_0, memreg 5 slots 32 ird 16

rpcrdma: connection to 172.16.2.240:20049 closed (-103)

rpcrdma: connection to 172.16.2.240:20049 on mlx4_0, memreg 5 slots 32 ird 16

Please guide us …

Thank you

Atul Yadav

See here; check all the references and comments as well:

HowTo Configure NFS over RDMA (RoCE) https://community.mellanox.com/s/article/howto-configure-nfs-over-rdma--roce-x

Hi Atul,

Per my colleague Ophir, please make sure to follow the procedures in the document below:

HowTo Configure NFS over RDMA (RoCE) https://community.mellanox.com/s/article/howto-configure-nfs-over-rdma--roce-x
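As a quick sanity check from that procedure, it may also be worth confirming on the server that nfsd actually has an RDMA listener registered on port 20049. A sketch, assuming the stock RHEL 6 paths:

lsmod | grep rdma                          # are svcrdma (server) / xprtrdma (client) loaded?
cat /proc/fs/nfsd/portlist                 # should include a line like "rdma 20049"
echo rdma 20049 > /proc/fs/nfsd/portlist   # add it if missing (nfsd must already be running)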

Thank you,

Sophie.

I’ve had no issues with CentOS 7 and CentOS 6 using HP ConnectX-2 cards and older InfiniHost III cards.

I did have issues trying to get some Isilon InfiniHost III cards working - I just gave up - the firmware was unreadable and unflashable, and the drivers never worked with them.

I’m using the stock RHEL / CentOS drivers with CentOS 6 & 7; I first did NFS over IB and then moved to NFS over RDMA. Don’t forget to make extra holes in the firewall, or allow everything on the IB interface in the firewall rules (see the sketch below). If your link is pingable, you’re 95% of the way there - any other problems getting NFS over IB working are most likely due to firewall rules. Getting NFS over RDMA going then just requires the extra tweaks mentioned on the web page above.
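For example, something along these lines; the interface name ib0 is an assumption (and on CentOS 7 you would do the equivalent with firewalld):

iptables -I INPUT -i ib0 -j ACCEPT   # allow all traffic arriving on the IPoIB interface
service iptables save                # persist the rule on RHEL/CentOS 6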

I had some fun doing tests with the older hardware I have. The PCIe bus gives out at 25 Gb/s, and my SSD file server gives out well before that; still, I needed 10 Gb/s ethernet or better to really exploit the speed of my SSD file servers. These test results are measured via fio, so they are a test of the link speed, not of the file systems involved.

Network                        GB data in 30 sec   Aggregate bandwidth (MB/s, Gb/s)   Bandwidth (MB/s, Gb/s)   Latency (ms)   IOPS
QDR IB 40Gb/s, NFS over RDMA   94                  3,100 / 25                         802 / 6.4                0.615          12,535
DDR IB 20Gb/s, NFS over RDMA   24.4                834 / 6.7                          208 / 1.7                2.4            3256
SDR IB 10Gb/s, NFS over RDMA   22.3                762 / 6.1                          190 / 1.5                2.57           2978
QDR IB 40Gb/s                  16.7                568 / 4.5                          142 / 1.1                3.4            2218
DDR IB 20Gb/s                  13.9                473 / 3.8                          118 / 0.94               4.1            1845
SDR IB 10Gb/s                  13.8                470 / 3.8                          117 / 0.94               4.2            1840
10Gb/s ethernet                5.9                 202 / 1.6                          51 / 0.41                9.7            793
1Gb/s ethernet                 3.2                 112 / 0.90                         28                       17.8           438
100Mb/s ethernet               346 MB              11.5                               2.9                      174            45
10Mb/s ethernet via switch     36 MB               1.2                                279 kB/s                 1797           4
10Mb/s ethernet via hub        33 MB               1.0                                260 kB/s                 1920           4
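For context, a sequential-read fio run along these lines is the sort of test behind numbers like these; the block size, queue depth, and job count here are illustrative assumptions rather than the exact parameters used:

fio --name=seqread --directory=/mnt/nfs --size=4G \
    --rw=read --bs=64k --ioengine=libaio --direct=1 \
    --iodepth=8 --numjobs=4 --group_reporting \
    --runtime=30 --time_based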