NFS over RDMA on OEL 7.4

Hello, my configuration is simple: OEL 7.4, two Mellanox ConnectX-3 VPI cards, an SX1036 switch, and two very fast NVMe drives.

My problem is this: I configured NFS over RDMA using the InfiniBand Support packages from OEL, because Mellanox OFED no longer supports NFS over RDMA from version 3.4 onward.
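For reference, the configuration follows the usual inbox NFSoRDMA steps, roughly like this (a minimal sketch; the export path, addresses, and mount point are examples, not my exact values):

    # Server: start NFS, load the NFS/RDMA server transport, listen on the RDMA port (20049)
    systemctl start nfs-server
    modprobe svcrdma
    echo "rdma 20049" > /proc/fs/nfsd/portlist

    # Example export in /etc/exports: /mnt/nvme 192.168.100.0/24(rw,no_root_squash)
    exportfs -ra

    # Client: load the RDMA client transport and mount with the rdma option
    modprobe xprtrdma
    mount -t nfs -o vers=3,rdma,port=20049 192.168.100.1:/mnt/nvme /mnt/nfs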

Everything works: I can connect to the server over RDMA and read and write to the NFS share, but I have a performance problem.

I ran tests on my striped LV locally, and fio shows 900k IOPS and around 3.8 GB/s at a 4k block size, but when I run the same tests from the NFS client I can't get more than 190k IOPS. Bandwidth is not the problem: when I increase the block size I can get over 4 GB/s. The bottleneck seems to be the number of IOPS delivered from server to client.
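The test was run with fio, roughly like this (a sketch; the file path, queue depth, and job count are examples, not my exact job file):

    # 4k random-read IOPS test; point --filename at the local LV first,
    # then at a file on the NFS mount to compare
    fio --name=randread --rw=randread --bs=4k --direct=1 \
        --ioengine=libaio --iodepth=32 --numjobs=8 \
        --size=10G --runtime=60 --time_based --group_reporting \
        --filename=/mnt/nfs/testfile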

Does anybody have an idea? I already changed rsize and wsize to 1M, but without any performance benefit.

My next step will be to configure link aggregation (LACP) to see if it changes anything; right now I'm using only one port. A sketch of what I plan to try follows below.
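In case it helps, the switch-side LACP configuration I plan to try looks roughly like this (a sketch based on the Mellanox LACP how-to; port and channel numbers are examples):

    # MLNX-OS CLI: enable LACP and aggregate two ports into one port-channel
    switch (config) # lacp
    switch (config) # interface port-channel 1
    switch (config interface port-channel 1) # exit
    switch (config) # interface ethernet 1/1 channel-group 1 mode active
    switch (config) # interface ethernet 1/2 channel-group 1 mode active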

Adam

Hi Adam,

Indeed, NFSoRDMA (NFS over RDMA) is no longer supported by MLNX_OFED since driver version 3.4.

If you are using the inbox driver provided by the OS (RHEL 7.4), Mellanox support and/or assistance is on a best-effort basis, because inbox driver support is handled through the OS vendor.

Why? The driver that comes with the RHEL OS is derived from the upstream kernel. The OS vendor gets the driver from kernel.org and modifies the code for their own needs.

We do not control that code, nor do we know what modifications the vendor has made to it.

The versions are also not the same between the Mellanox OFED driver and the inbox driver.

That being said, a few recommendations and suggestions below:

Make sure the HCAs' firmware is aligned on both hosts and at the latest revision (Mellanox.com).

Make sure the switch's MLNX-OS is aligned and at the latest revision (Mellanox.com).
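For example, you can check the adapter firmware from the host, and the switch OS version from the switch CLI (the device name is an example):

    # On each host: report the ConnectX-3 firmware version
    ibv_devinfo | grep fw_ver

    # With the Mellanox firmware tools (MFT) installed, query for available updates
    mst start
    mlxfwmanager --query

    # On the switch (MLNX-OS CLI): show the installed OS version
    #   switch > enable
    #   switch # show version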

You can consult:

A) Performance Tuning for Mellanox Adapters

https://community.mellanox.com/s/article/performance-tuning-for-mellanox-adapters

B) Red Hat Enterprise Linux Network Performance Tuning Guide

https://community.mellanox.com/s/article/red-hat-enterprise-linux-network-performance-tuning-guide

C) Performance Tuning Guide

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7-Beta/html/Performance_Tuning_Guide/

D) HowTo Configure LACP on Mellanox Switches

https://community.mellanox.com/s/article/howto-configure-lacp-on-mellanox-switches

E) Troubleshoot LAG/MLAG LACP-PDU Rate Issues

https://community.mellanox.com/s/article/troubleshoot-lag-mlag-lacp-pdu-rate-issues

F) Perftest Package (RDMA), with a usage sketch after this list

https://community.mellanox.com/s/article/perftest-package
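For a quick raw-RDMA baseline between the two hosts, independent of NFS, you can run ib_write_bw from the perftest package, for example (the device name and server hostname are examples):

    # Server side: start an RDMA write bandwidth listener on the ConnectX-3
    ib_write_bw -d mlx4_0 -s 4096 --report_gbits

    # Client side: connect to the server and run the same test
    ib_write_bw -d mlx4_0 -s 4096 --report_gbits <server-hostname>

If the raw RDMA numbers are far above what NFS delivers, the bottleneck is more likely in the NFS/RPC layer than in the fabric.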

Sophie.