What are the proper memory region access flags for GPUDirect RDMA?

Hi everyone,

I’m trying out GPUDirect RDMA to send data residing in GPU memory to a remote host, bypassing the GPU server’s CPU.

When I register the GPU memory with the RDMA protection domain using empty access flags, the send/recv operations all succeed without reporting any error, but the data received on the remote host is just a bunch of zeros. When I change the GPU-side MR access flags to IBV_ACCESS_LOCAL_WRITE, the remote host receives the correct data.
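Roughly what I'm doing (a minimal sketch; PD/QP setup, the actual send, and error handling are omitted, and the names and sizes are placeholders):

```c
#include <stddef.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

/* Sketch: register a GPU buffer with a protection domain.
 * Assumes nvidia-peermem is loaded so the HCA can register
 * the device pointer directly. */
static struct ibv_mr *register_gpu_buf(struct ibv_pd *pd,
                                       size_t len, int access)
{
    void *gpu_buf = NULL;
    cudaMalloc(&gpu_buf, len);  /* device allocation, checks omitted */
    return ibv_reg_mr(pd, gpu_buf, len, access);
}

/* What I tried first (remote side receives zeros):
 *     mr = register_gpu_buf(pd, len, 0);
 * What actually works:
 *     mr = register_gpu_buf(pd, len, IBV_ACCESS_LOCAL_WRITE);
 */
```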

From my perspective, IBV_ACCESS_LOCAL_WRITE should not be required on the GPU side, because the RDMA HCA only reads the data in that region. What’s going wrong here?

System environment: NVIDIA A100 with CUDA 12.1 on the GPU side, and ConnectX-6 InfiniBand cards on both sides.

Hi chensy20,
This is not specific to GPU memory versus CPU memory. Even when a local buffer is not exposed for remote write access, we still set the LOCAL_WRITE access flag so that data can be written into the buffer locally before it is sent to the remote side.
This is what I see in all the RDMA samples, including ones that have nothing to do with GPUs.
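For example, a typical host-memory sample registers its send buffer along these lines (a minimal sketch; variable names are illustrative):

```c
#include <stdlib.h>
#include <infiniband/verbs.h>

/* Sample-style registration of a host send buffer.
 * LOCAL_WRITE is set even though the buffer is only ever sent,
 * never written by a remote peer. */
static struct ibv_mr *register_send_buf(struct ibv_pd *pd, size_t len)
{
    void *buf = malloc(len);  /* plain host memory this time */
    return ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
}
```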
Best regards,
Michael.

Does that mean we still have to set the LOCAL_WRITE flag even if the buffer is only modified by the user application, not by RDMA operations?

Yes

I suggest taking a simple working sample that uses CPU memory and using it as a reference.
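The ibv_rc_pingpong sample that ships with rdma-core is one such starting point; if I recall correctly, it registers its buffer with IBV_ACCESS_LOCAL_WRITE as well.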

Ok, thanks!
