Does the RTX A4000 support GPUDirect RDMA?

Hi, I would like to use this example code https://github.com/NVIDIA/jetson-rdma-picoevb to stream data from a Xilinx Alveo U50 to an RTX A4000 GPU.
Before I purchase the GPU, I want to confirm that the A4000 supports GPUDirect RDMA.


Yes, it works.
I am using RDMA with a similar driver between a Xilinx FPGA and my RTX A4000.


Can you confirm you are doing some sort of PCIe peer-to-peer DMA from the FPGA board into the GPU memory?
I want to do this with a ConnectX NIC (Mellanox). It should work with the standard nvidia-peermem module. What bothers me is that this GPU is no longer branded as a Quadro, and the docs state that GPUDirect is for the Quadro (Tesla) series…
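As a quick sanity check before wiring everything up, you can verify that nvidia-peermem is actually loaded by looking at `/proc/modules` (where it appears as `nvidia_peermem`). A minimal sketch, with a hypothetical helper name:

```python
def peermem_loaded(modules_text: str) -> bool:
    """Return True if nvidia_peermem appears in /proc/modules-style
    text (the first column of each line is the module name)."""
    return any(line.split()[0] == "nvidia_peermem"
               for line in modules_text.splitlines() if line.strip())

if __name__ == "__main__":
    # On a Linux host, read the live module list.
    with open("/proc/modules") as f:
        print("nvidia_peermem loaded:", peermem_loaded(f.read()))
```

If it is missing, `modprobe nvidia-peermem` (shipped with the NVIDIA driver) loads it.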

BTW, is your throughput capped at 80% of the theoretical max bandwidth?

I don’t use the NIC, I just do a peer-to-peer PCIe transfer from FPGA to GPU through a PCIe switch.
The dropping of the Quadro name is just marketing, the A4000 is still a “Quadro” in spirit… “RTX + A4000 + Quadro” just got too long, I guess.
RDMA clearly works on the A4000 (I have it running).

The PCIe efficiency depends on multiple factors, e.g. the maximum payload size (MPS), which is normally limited by the device with the lowest MPS in the hierarchy, typically some low-end PCIe x1 device.
I got rid of the 128-byte devices and now run with a 256-byte MPS.
It also depends on whether the traffic is bidirectional, as the PCIe ACKs take some bandwidth.
I get about 80% of the theoretical x8 Gen3 bandwidth on my system.
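For reference, the numbers above can be sanity-checked with a back-of-the-envelope calculation. The 24-byte TLP overhead below is an assumed typical figure (framing + header + LCRC), not something measured in this thread:

```python
# Rough PCIe Gen3 x8 payload-throughput estimate.
LANES = 8
GEN3_GTS = 8e9            # 8 GT/s per lane
ENCODING = 128 / 130      # Gen3 128b/130b line encoding
MPS = 256                 # max payload size in bytes
TLP_OVERHEAD = 24         # assumed bytes per TLP: framing + header + LCRC

raw_bytes_per_s = LANES * GEN3_GTS * ENCODING / 8   # ~7.88 GB/s
tlp_efficiency = MPS / (MPS + TLP_OVERHEAD)         # ~91% at 256-byte MPS
estimate = raw_bytes_per_s * tlp_efficiency

print(f"raw link rate : {raw_bytes_per_s / 1e9:.2f} GB/s")
print(f"TLP efficiency: {tlp_efficiency:.1%}")
print(f"payload rate  : {estimate / 1e9:.2f} GB/s")
```

ACK/flow-control DLLPs, read-completion overhead, and anything else sharing the link eat further into this, so landing around 80% of the theoretical rate in practice is plausible.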

@raph38130 I was curious if you managed to get this setup (connectx NIC to A4000) working via RDMA? I am interested in similar but I noticed the DPDK cuda driver only seems to support very high end cards https://doc.dpdk.org/guides/gpus/cuda.html

Yes, it works out of the box using the nvidia-peermem kernel module (A6000 / ConnectX-5, GPUDirect).