Why is NFSoRDMA in CentOS 7.6.1810 limited to 10 Gbps?

I have a cluster where all of the nodes (head and slave nodes) are using ConnectX-4 dual port 4X EDR IB (MCX456A-ECAT) connected to an externally managed MSB-7890 36-port 4X EDR IB switch.

All of the nodes are also running CentOS 7.6.1810 with the software group ‘Infiniband Support’ installed (because this one still supports NFSoRDMA).

On the head node, I have four Samsung 860 EVO 1 TB SATA 6 Gbps SSDs in RAID0 through the Marvell 9230 controller on an Asus P9X79-E WS motherboard.

Testing on the head node itself shows that I can get around 21.9 Gbps total throughput when running:

$ time -p dd if=/dev/zero of=10Gfile bs=1024k count=10240

But when I try to do the same thing over IB, I can only get about 8.5 Gbps at best.
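In case it is useful, the raw IB link itself can be checked independently of NFS with the perftest tools (a rough sketch, assuming the perftest package is installed and the HCA shows up as mlx5_0; adjust the device name as needed):

$ ib_write_bw -d mlx5_0 -a          # on the head node (acts as the server)
$ ib_write_bw -d mlx5_0 -a aes0     # on a slave node (connects to the head node)

A healthy 4X EDR link should report on the order of 11,000+ MB/s at the larger message sizes, which would rule out the fabric itself as the bottleneck.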

NFSoRDMA is configured properly.

Here is /etc/exports:

/home/cluster *(rw,async,no_root_squash,no_all_squash,no_subtree_check)
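For completeness, the exports can be re-applied and the effective options double-checked with exportfs (part of nfs-utils):

$ exportfs -ra     # re-export everything listed in /etc/exports
$ exportfs -v      # show active exports with their effective options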

Here is /etc/rdma/rdma.conf:

# Load IPoIB
IPOIB_LOAD=yes
# Load SRP (SCSI Remote Protocol initiator support) module
SRP_LOAD=yes
# Load SRPT (SCSI Remote Protocol target support) module
SRPT_LOAD=yes
# Load iSER (iSCSI over RDMA initiator support) module
ISER_LOAD=yes
# Load iSERT (iSCSI over RDMA target support) module
ISERT_LOAD=yes
# Load RDS (Reliable Datagram Service) network protocol
RDS_LOAD=no
# Load NFSoRDMA client transport module
XPRTRDMA_LOAD=yes
# Load NFSoRDMA server transport module
SVCRDMA_LOAD=yes
# Load Tech Preview device driver modules
TECH_PREVIEW_LOAD=no
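As a sanity check, the NFSoRDMA transport modules can be confirmed as loaded on both the head node and a client (a sketch; on this kernel the client and server transports may show up as a single rpcrdma module rather than separate xprtrdma/svcrdma modules):

$ lsmod | grep -E 'rpcrdma|xprtrdma|svcrdma'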

Here is /etc/fstab on the slave nodes:

aes0:/home/cluster /home/cluster nfs defaults,rdma,port=20049 0 0
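For reference, the equivalent one-off mount from a slave node would be something along these lines (same host, export, and options as the fstab entry above):

# mount -t nfs -o rdma,port=20049 aes0:/home/cluster /home/cluster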

And here is confirmation that the NFS share is mounted using RDMA:

aes0:/home/cluster on /home/cluster type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=rdma,port=20049,timeo=600,retrans=2,sec=sys,clientaddr=xxxxxxx,local_lock=none,addr=xxxxxxx)
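On the head node, the RDMA listener itself can also be verified (a sketch; with SVCRDMA_LOAD=yes the inbox scripts should have registered it already):

# cat /proc/fs/nfsd/portlist

The output should include an 'rdma 20049' entry alongside the usual 'tcp 2049' listener.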

The RAID volume is mounted like this:

$ mount

/dev/sdb1 on /home/cluster type xfs (rw,relatime,attr2,inode64,noquota)
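In case it matters, a direct-I/O variant of the same dd test can be used to confirm what the RAID0 set sustains without help from the page cache (a sketch; oflag=direct bypasses the page cache):

$ time -p dd if=/dev/zero of=/home/cluster/10Gfile bs=1024k count=10240 oflag=direct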

I don’t really understand why the NFSoRDMA mount appears to be capped below 10 Gbps.

Your help is greatly appreciated.

Thank you.

Hello Ewen,

Many thanks for posting your question on the Mellanox Community.

Unfortunately, as you are running the CentOS INBOX drivers, support for NFSoRDMA needs to be obtained through the Linux distro, as stated on our website → https://www.mellanox.com/page/inbox_drivers

Our own driver, Mellanox OFED, does not support NFSoRDMA.

Many thanks,

~Mellanox Technical Support

“Our own driver, Mellanox OFED, does not support NFSoRDMA.”

Yes, I know.

And that is a problem for me, because I don’t have the in-house knowledge or expertise to deploy iSER now that NFSoRDMA has been removed from the Mellanox driver.

This is precisely why I am using CentOS’ “INBOX” driver in the first place.

That statement, by the way, directly contradicts Mellanox’s own marketing materials (see: https://www.mellanox.com/related-docs/products/IB_Adapter_card_brochure.pdf), where the IB Adapter card brochure states:

“Adapters support SRP, iSER, NFS RDMA, SMB Direct, SCSI and iSCSI, as well as NVMe over Fabrics storage protocols.”

But as you have just stated, Mellanox’s drivers do NOT support NFS over RDMA, and the brochure is therefore incorrect and/or misleading advertising.