NFS/RDMA on CentOS 7: small-file corruption leaking kernel memory

In an HPC environment, we have nodes running CentOS 7.9.2009 / kernel 3.10.0-1160 mounting an NFS/RDMA server with the following vendor-documented (Mellanox) flags:

10.0.0.1:/pool0/home on /mnt/rdma type nfs (rw,relatime,sync,vers=3,rsize=262144,wsize=262144,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,nocto,noac,proto=rdma,port=20049,timeo=600,retrans=2,sec=sys,mountaddr=10.0.0.1,mountvers=3,mountproto=tcp,local_lock=none,addr=10.0.0.1)

If I create a zero-filled file of 701 bytes or more

$ dd if=/dev/zero of=/mnt/rdma/test bs=1 count=701

I receive exactly the expected file:

$ hexdump /mnt/rdma/test

0000000 0000 0000 0000 0000 0000 0000 0000 0000

*

00002b0 0000 0000 0000 0000 0000 0000 0000

00002bd

Now if I do the same test with 700 bytes or fewer, the file is corrupted:

$ dd if=/dev/zero of=/mnt/rdma/test bs=1 count=700

$ hexdump /mnt/rdma/test | head -10

0000000 9dfe a757 0000 0100 0000 0000 0000 0000

0000010 0000 0000 0000 0000 0000 0000 0000 0100

0000020 0000 0100 0000 a401 0000 0100 0000 0000

0000030 0000 0000 0000 0000 0000 bc02 0000 0000

0000040 0000 0002 0000 0000 0000 0000 168c 0083

0000050 4a0f c612 0000 0000 b000 5a5b 1262 1e75

0000060 9d04 bc90 1262 1a75 c233 50e9 1262 1a75

0000070 c233 50e9 0000 bc02 0000 0100 0000 bc02

0000080 0000 0000 000a 5b5a 00b0 0000 0f9d 030c

0000090 0000 2d00 0000 0000 0000 0000 0000 0000

When trying the same commands with NFS over TCP instead of RDMA, the file is not corrupted.

Tweaking /proc/sys/sunrpc/rdma_memreg_strategy produces very strange results; for example, with rdma_memreg_strategy == 6 the noise looks like this:

0000120 8898 123d 4506 c0f8 b0d6 3940 44b9 c0f8

0000130 bc92 ba64 4491 c0f8 165c 0ae4 448e c0f8

0000140 c82e 19e7 44a9 c0f8 2ddc 11f2 44dc c0f8

0000150 ad84 79aa 4520 c0f8 b4af 68ca 4571 c0f8

0000160 b752 b3ef 45cb c0f8 b04a 97b8 462d c0f8

0000170 7543 d694 4695 c0f8 d5dd 2cdc 4702 c0f8

0000180 a9bb f717 476e c0f8 4a78 13d9 47d7 c0f8

0000190 2a26 a9e4 4834 c0f8 71c8 3d91 4882 c0f8

00001a0 fcae 8d55 48bb c0f8 2b04 bcf8 48dd c0f8

00001b0 b04c a4be 48e8 c0f8 79d3 db6c 48dc c0f8

00001c0 9f84 c912 48bb c0f8 096c b285 4886 c0f8

00001d0 f0ff ee8e 483e c0f8 d171 e773 47e4 c0f8

00001e0 8e7b f3d1 4779 c0f8 4bbd d26f 46ff c0f8

Notice the pattern?
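The tweak above can be scripted. A rough sketch of the procedure (assumptions on my part: the sysctl is only read when the RDMA transport is created, so the share must be remounted after each change, and the strategy numbers come from net/sunrpc/xprtrdma/xprt_rdma.h on these kernels, e.g. 4 = MTHCAFMR, 5 = FRMR (the default), 6 = ALLPHYSICAL — verify against your kernel tree; run as root):

```shell
#!/bin/sh
# Sweep memory-registration strategies and dump the head of the test file.
# rdma_memreg_strategy is read when the RDMA transport is set up, so the
# share is remounted after each change. Strategy values (4 = MTHCAFMR,
# 5 = FRMR, 6 = ALLPHYSICAL) are assumed from xprt_rdma.h -- check your tree.
for s in 4 5 6; do
    umount /mnt/rdma
    echo "$s" > /proc/sys/sunrpc/rdma_memreg_strategy
    mount -t nfs -o proto=rdma,port=20049,vers=3 \
        10.0.0.1:/pool0/home /mnt/rdma
    dd if=/dev/zero of=/mnt/rdma/test bs=1 count=700 2>/dev/null
    printf 'strategy=%s:\n' "$s"
    hexdump /mnt/rdma/test | head -4
done
```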

Now in the same environment, older nodes running CentOS 7.1 with kernel 3.10.0-229 work just fine, without any file corruption.

Adding to the mystery, I tried replacing the CentOS drivers with Mellanox's (MLNX_OFED_LINUX-4.9-4.1.7.0-rhel7.9-x86_64), and the corruption threshold drops to 640 bytes.
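Since the threshold differs between driver stacks, it can be pinned down automatically. Here's a sketch that binary-searches for the largest size that still reads back corrupted, assuming (as observed above) that sizes up to some threshold corrupt and larger ones are fine; pass the mount point as the first argument (e.g. /mnt/rdma) — it defaults to a local temp directory, where it should find nothing:

```shell
#!/bin/sh
# Binary-search the largest zero-filled file size that reads back corrupted,
# assuming sizes <= threshold corrupt and larger sizes round-trip cleanly.
DIR=${1:-$(mktemp -d)}   # default: local temp dir (no corruption expected)

is_corrupt() {  # write $1 zero bytes, succeed if the readback differs
    dd if=/dev/zero of="$DIR/probe" bs=1 count="$1" 2>/dev/null
    ! cmp -s -n "$1" "$DIR/probe" /dev/zero
}

lo=1; hi=2048; threshold=0
while [ "$lo" -le "$hi" ]; do
    mid=$(( (lo + hi) / 2 ))
    if is_corrupt "$mid"; then
        threshold=$mid
        lo=$((mid + 1))     # corrupted: the threshold is at least $mid
    else
        hi=$((mid - 1))     # clean: the threshold is below $mid
    fi
done
rm -f "$DIR/probe"
echo "corruption threshold: $threshold bytes (0 = none found)"
```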

This is actually worse than I thought: your driver actually LEAKS kernel memory into the corrupted files. Here's an xxd dump of what is supposed to be a zero-filled file:

$ xxd 20220222-1

0000000: b449 1fc1 0000 0001 0000 0000 0000 0000 .I…

0000010: 0000 0000 0000 0000 0000 0000 0000 0001 …

0000020: 0000 0001 0000 01a4 0000 0001 0000 0000 …

0000030: 0000 0000 0000 0000 0000 0258 0000 0000 …X…

0000040: 0000 0200 0000 0000 0000 0000 8c16 8300 …

0000050: 0f4a 12c6 0000 0000 00b0 5fd3 6214 a09b .J…_.b…

0000060: 0c5f 0ffa 6214 a088 0f67 27e3 6214 a088 ._…b…g’.b…

0000070: 0f67 27e3 0000 0258 0000 0001 0000 0258 .g’…X…X

0000080: 0000 5c9a 0000 0000 0000 0000 0000 0020 …

0000090: 0100 0601 c612 4a0f 0083 168c 0000 0000 …J…

00000a0: 0000 0000 0a00 fd56 b000 0000 6d9c 0903 …V…m…

00000b0: 0000 0000 3d33 8000 0000 011a 0000 0002 …=3…

00000c0: 0000 011a 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d …------------

00000d0: 0a0a 0a0a 0a2d 2d2d 2d2d 2d2d 2d2d 2d2d …-----------

00000e0: 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d ----------------

00000f0: 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 2049 -------------- I

0000100: 7465 7261 7469 6f6e 2038 3438 3928 2020 teration 8489(

0000110: 2035 2920 202d 2d2d 2d2d 2d2d 2d2d 2d2d 5) -----------

0000120: 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d ----------------

0000130: 2d2d 2d2d 2d2d 2d2d 2d2d 2d2d 0a0a 0a20 ------------…

0000140: 2020 2050 4f54 4c4f 4b3a 2020 6370 7520 POTLOK: cpu

0000150: 7469 6d65 2020 2020 302e 3033 3532 3a20 time 0.0352:

0000160: 7265 616c 2074 696d 6520 2020 2030 2e30 real time 0.0

0000170: 3334 390a 2020 2020 5345 5444 494a 3a20 349. SETDIJ:

0000180: 2063 7075 2074 696d 6520 2020 2030 2e30 cpu time 0.0

0000190: 3332 383a 2072 6561 6c20 7469 6d65 2020 328: real time

00001a0: 2020 302e 3033 3239 0a20 2020 2045 4444 0.0329. EDD

00001b0: 4941 473a 2020 6370 7520 7469 6d65 2020 IAG: cpu time

00001c0: 2020 332e 3139 3638 3a20 7265 616c 2074 3.1968: real t

00001d0: 696d 6520 2020 2033 2e31 3938 310a 0000 ime 3.1981…

00001e0: 3131 3238 2020 2020 0a48 4352 2020 2020 1128 .HCR

00001f0: 2020 2020 2020 2020 2038 320a 2020 2020 82.

0000200: 2031 322e 3934 3834 3732 3233 2020 2020 12.94847223

0000210: 2020 2020 2d34 2e31 3938 3830 3635 3138 -4.198806518

0000220: 2020 2020 2020 2020 302e 3933 3536 3336 0.935636

0000230: 3039 3935 2020 2020 0a20 2020 3133 2e33 0995 . 13.3

0000240: 3837 3634 3434 3334 3120 2020 2020 2020 876444341

0000250: 322e 3732 3939 3339 2.729939

Hi Emile,

Thank you for using NVIDIA products and for reporting the memory leak with NFS over RDMA in OFED 4.9.

As of Mellanox OFED version 3.4-x and above, NFSoRDMA is no longer supported.

We are confirming internally whether OFED 4.9 supports NFSoRDMA.

In the meantime, we suggest you switch to the drivers supplied by the OS vendor (inbox) to use NFSoRDMA.

I'll follow up here if there is an update.

Regards,

Levei

Hi Levei,

I also tried OFED 4.9 and it is affected. Inbox kernels to this day, including the current Linux kernel, are affected by this bug as well. I've opened a bug report in Red Hat's Bugzilla.

I published a fix here: https://unix.stackexchange.com/a/692274/62822. TL;DR: it's the inline receive mode that is broken when reading data < 700 B; forcing chunked mode for every read "fixes" the issue.
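Once the fix is in place, the round trip can be regression-tested with a quick sketch like the one below (point DIR at the NFS/RDMA mount, e.g. /mnt/rdma; it defaults to a local temp directory, which serves as a sanity baseline; the size list brackets the two observed thresholds, 640 B with MLNX_OFED and 700 B with inbox drivers):

```shell
#!/bin/sh
# Round-trip regression check: write zero-filled files at sizes around the
# observed corruption thresholds and verify they read back as all zeros.
DIR=${1:-$(mktemp -d)}   # default: local temp dir (sanity baseline)
bad=0
for size in 1 512 639 640 641 699 700 701 1024; do
    f="$DIR/zeros.$size"
    dd if=/dev/zero of="$f" bs=1 count="$size" 2>/dev/null
    # cmp -n limits the comparison to the first $size bytes of /dev/zero
    if ! cmp -s -n "$size" "$f" /dev/zero; then
        echo "CORRUPTED at $size bytes"
        bad=1
    fi
    rm -f "$f"
done
[ "$bad" -eq 0 ] && echo "all sizes read back zero-filled"
```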