Using GPUDirect RDMA under OpenCL

Hello,
I’m attempting to transfer data between an FPGA and a GPU using NVIDIA GPUDirect RDMA on the OpenCL platform. On the FPGA side I use the Avalon msgDMA core to handle the data transfers. Currently I can transfer data from FPGA to GPU, but the other direction does not work.
The setup on the FPGA side is as follows: the user application first sends DMA descriptors to the DMA controller, and the controller then performs the actual DMA transfer. Each descriptor contains the source and destination addresses for the data to be transferred.
Because the FPGA-to-GPU direction works, I believe the source and destination addresses are correct, so I suspect the problem is on the GPU side. With GPUDirect RDMA, GPU memory is pinned and exposed over PCIe so that a third-party device can access it; this is done with the function nvidia_p2p_get_pages(…). What I’d like to know is: when pinning GPU memory this way, does the NVIDIA driver set read/write permissions on the pinned pages? If so, how can I change the read/write permissions of the pinned buffer? If not, is there some other problem with my approach?
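For reference, the kernel-side pinning path looks roughly like this. This is a hedged sketch based on NVIDIA's GPUDirect RDMA kernel API (nv-p2p.h from the driver sources); it runs inside a kernel module, not user space, and the function and struct names here are from that API, while `pin_gpu_buffer` is a hypothetical wrapper. Notably, nvidia_p2p_get_pages() takes no permission argument, so per-direction read/write rights are not something this call lets you set:

```c
/* Sketch of pinning GPU memory for third-party DMA (kernel module code).
 * Assumes <nv-p2p.h> from the NVIDIA driver source tree is available. */
#include <nv-p2p.h>

#define GPU_PAGE_SIZE 0x10000  /* GPU pages are 64 KiB */

static struct nvidia_p2p_page_table *page_table;

static void free_callback(void *data)
{
    /* Invoked if the driver revokes the mapping (e.g. on cuMemFree). */
    nvidia_p2p_free_page_table(page_table);
}

/* Hypothetical helper: gpu_va is the GPU virtual address passed down
 * from the user application, aligned to GPU_PAGE_SIZE. On current
 * drivers the token/va_space arguments are deprecated and passed as 0. */
int pin_gpu_buffer(uint64_t gpu_va, uint64_t len)
{
    int ret = nvidia_p2p_get_pages(0, 0, gpu_va, len,
                                   &page_table, free_callback, NULL);
    if (ret)
        return ret;

    /* page_table->pages[i]->physical_address now holds the bus
     * addresses the FPGA's DMA descriptors should target. */
    return 0;
}
```

Note there is no read/write flag anywhere in this call; if GPU-bound writes fail, the cause is more likely in how the resulting bus addresses are mapped or used by the FPGA-side driver than in a permission bit set by the NVIDIA driver.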
I hope someone can help me with this issue. Thanks!


This would be a sick feature. What would it take for NVIDIA to implement this? They already have existing extensions to OpenCL: Index of /OpenCL/extensions/nv
I imagine it would be similar to AMD’s cl_amd_bus_addressable_memory extension, which allows third-party PCIe devices to access GPU memory.
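For context, the AMD extension mentioned above works roughly like this. This is a sketch based on the cl_amd_bus_addressable_memory extension; it only runs on AMD hardware reporting that extension, error handling is omitted, and `expose_buffer` is a hypothetical helper:

```c
/* Sketch: exposing an OpenCL buffer's PCIe bus address to a peer
 * device via AMD's cl_amd_bus_addressable_memory extension. */
#include <stdio.h>
#include <CL/cl.h>
#include <CL/cl_ext.h>  /* cl_bus_address_amd, CL_MEM_BUS_ADDRESSABLE_AMD */

/* Assumes ctx and queue were created on a device that lists
 * cl_amd_bus_addressable_memory in CL_DEVICE_EXTENSIONS, and that
 * clEnqueueMakeBuffersResidentAMD was resolved at runtime via
 * clGetExtensionFunctionAddressForPlatform(). */
void expose_buffer(cl_context ctx, cl_command_queue queue, size_t size)
{
    cl_int err;

    /* Create a buffer that can be pinned at a fixed PCIe bus address. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_BUS_ADDRESSABLE_AMD,
                                size, NULL, &err);

    /* Pin it and retrieve the bus address a peer device can DMA to. */
    cl_bus_address_amd addr;
    clEnqueueMakeBuffersResidentAMD(queue, 1, &buf, CL_TRUE,
                                    &addr, 0, NULL, NULL);

    /* addr.surface_bus_address is what a third-party DMA engine
     * (e.g. an FPGA) would use as its source/destination address. */
    printf("bus address: 0x%llx\n",
           (unsigned long long)addr.surface_bus_address);

    clReleaseMemObject(buf);
}
```

An equivalent NVIDIA OpenCL extension would presumably expose the same kind of bus-address handle that GPUDirect RDMA currently provides only through CUDA and the kernel API.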

You can always request new features via a bug report.
