Jetson-rdma-picoevb not working in x86 PC mode

Hi,

I’m using the jetson-rdma-picoevb software repository to test a PCIe setup between an x86 machine, a Xilinx FPGA, and a Turing GPU. The FPGA and GPU are attached to the x86 root complex through a PCIe switch.

I am able to get the jetson-rdma-picoevb software to work using host-allocation DMA, i.e., it sets up the FPGA DMA engine to perform a transaction from x86 host memory to the FPGA, and from FPGA memory back to x86 host memory.
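For context, the working host-allocation flow is essentially the following. This is a minimal sketch: the device node, ioctl name, and request struct are assumptions modeled on the sample’s client applications, so verify them against picoevb-rdma-ioctl.h in your checkout.

```c
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical ioctl interface modeled on the picoevb client samples;
 * check these names against picoevb-rdma-ioctl.h in your checkout. */
struct pevb_dma_req {
	uint64_t src;	/* host virtual address of the source buffer */
	uint64_t dst;	/* host virtual address of the destination buffer */
	uint64_t len;	/* transfer length in bytes */
};
#define PEVB_IOC_H2C2H_DMA _IOW('P', 1, struct pevb_dma_req)

/* Fill a host buffer, DMA it host -> FPGA -> host, and verify it. */
int host_dma_roundtrip(void)
{
	uint8_t src[4096], dst[4096];
	struct pevb_dma_req req;
	int fd, ret;

	memset(src, 0xA5, sizeof(src));
	memset(dst, 0x00, sizeof(dst));

	fd = open("/dev/picoevb", O_RDWR);
	if (fd < 0)
		return -1;

	/* The driver pins both buffers and programs the FPGA DMA engine. */
	req.src = (uintptr_t)src;
	req.dst = (uintptr_t)dst;
	req.len = sizeof(src);
	ret = ioctl(fd, PEVB_IOC_H2C2H_DMA, &req);
	close(fd);
	if (ret)
		return -1;

	/* The round trip worked if the data survived intact. */
	return memcmp(src, dst, sizeof(src)) ? -1 : 0;
}
```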

However, when I try to run the CUDA-allocation DMA (i.e., a transaction from GPU memory to the FPGA, and vice versa), the destination CUDA buffer is all zeros. The driver reports no errors, and using PCIe switch monitoring tools I can see an uptick in PCIe throughput on the correct ports during the DMA transaction.
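For reference, the CUDA buffer is allocated along these lines (a sketch; alloc_rdma_buffer is my own name, and a current CUDA context is assumed). The GPUDirect RDMA documentation requires CU_POINTER_ATTRIBUTE_SYNC_MEMOPS to be set on the allocation:

```c
#include <stddef.h>
#include <cuda.h>

/* Allocate device memory and mark it for third-party DMA. Per the
 * GPUDirect RDMA documentation, CU_POINTER_ATTRIBUTE_SYNC_MEMOPS must
 * be set so CUDA memory operations on the buffer stay synchronous
 * with respect to DMA issued by another PCIe device. */
int alloc_rdma_buffer(size_t size, CUdeviceptr *out)
{
	CUdeviceptr dptr;
	int flag = 1;

	if (cuMemAlloc(&dptr, size) != CUDA_SUCCESS)
		return -1;

	/* Without this attribute, accesses that race with a third-party
	 * DMA write can observe stale data. */
	if (cuPointerSetAttribute(&flag, CU_POINTER_ATTRIBUTE_SYNC_MEMOPS,
				  dptr) != CUDA_SUCCESS) {
		cuMemFree(dptr);
		return -1;
	}

	*out = dptr;
	return 0;
}
```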

I’ve done testing with my own software and verified that the nvidia_p2p_get_pages call pins pages into the correct BAR. I’ve also verified that the x86 host can read and write the BAR region corresponding to the pinned pages.
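The kernel-side check was roughly the following (a sketch; pin_gpu_buffer and free_page_table are my own names, and the calls are the desktop-driver nv-p2p API, where the p2p_token/va_space arguments are deprecated and passed as 0):

```c
#include <linux/kernel.h>
#include "nv-p2p.h"

/* Invalidation callback: if the GPU driver revokes the mapping, DMA
 * must stop and the page table must be released. */
static void free_page_table(void *data)
{
	/* stop FPGA DMA and call nvidia_p2p_free_page_table() here */
}

/* Pin a 64 KiB-aligned GPU virtual address range and report where the
 * first page landed. */
static int pin_gpu_buffer(u64 gpu_va, u64 len,
			  struct nvidia_p2p_page_table **pt)
{
	int ret;

	ret = nvidia_p2p_get_pages(0, 0, gpu_va, len, pt,
				   free_page_table, NULL);
	if (ret)
		return ret;

	/* physical_address points into the GPU BAR1 aperture. Note that
	 * with an IOMMU enabled, a peer device should target addresses
	 * obtained via nvidia_p2p_dma_map_pages() rather than using
	 * physical_address directly. */
	pr_info("pinned %u pages, first at %#llx\n", (*pt)->entries,
		(unsigned long long)(*pt)->pages[0]->physical_address);
	return 0;
}
```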

It seems as if the GPU is dropping PCIe transactions exclusively from the FPGA. Is there some sort of permissions configuration that must be made to allow a third-party device to modify GPU BAR regions? What other debugging steps can I take here?

Thanks for your time

RDMA is a hardware-dependent feature. The sample is for the Jetson platform.

For desktop GPUs, please check the document below:

The repository I’m referencing says it supports desktop GPU configurations:

GitHub - NVIDIA/jetson-rdma-picoevb: Minimal HW-based demo of GPUDirect RDMA on NVIDIA Jetson AGX Xavier running L4T

I have read that document in full and have made sure my code matches it.
