Can I use GPUDirect with VMA for a UDP multicast?

My goal is to receive a UDP multicast stream with VMA and write the packets directly into NVIDIA GPU memory for further processing, without making any copy of the packets in CPU system memory. According to the VMA documentation, the library uses InfiniBand Verbs to bypass the Linux TCP/IP stack. According to the Mellanox OFED code, once the nvidia_peer_memory driver from Mellanox is loaded, you only have to pass the CUDA malloc'ed address to the InfiniBand memory registration call for GPUDirect to work.
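For reference, here is a minimal sketch of that registration step as I understand it, assuming CUDA and rdma-core are installed and the nv_peer_mem module is loaded; the device index, buffer size, and error handling are illustrative, not a complete application:

```c
#include <cuda_runtime.h>
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    const size_t len = 4096;   /* illustrative receive buffer size */
    void *gpu_buf = NULL;

    /* Allocate the receive buffer in GPU memory. */
    if (cudaMalloc(&gpu_buf, len) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* Open the first RDMA device and allocate a protection domain. */
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* With nv_peer_mem loaded, the same registration call used for
     * host memory should accept the cudaMalloc'ed address. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, len,
                                   IBV_ACCESS_LOCAL_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }
    printf("GPU buffer registered, lkey=0x%x\n", mr->lkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cudaFree(gpu_buf);
    return 0;
}
```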

Is it possible to have VMA work with CUDA malloc'ed addresses so that frames are written directly to GPU memory?

Hi,

As you said, GPUDirect allows you to pin memory on your GPU and register it via the standard memory registration API.

Once that is done, it does not matter to VMA which memory is used for traffic.
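For completeness, VMA is normally applied to unmodified socket code via LD_PRELOAD. Below is a minimal multicast receiver sketch; the group address and port are illustrative, and this sketch receives into a host buffer, so it does not by itself demonstrate the GPU-memory placement discussed above. Run it under `LD_PRELOAD=libvma.so` to have VMA handle the traffic:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Bind to the multicast port on any local interface. */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);                /* illustrative port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* Join the multicast group. */
    struct ip_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1"); /* illustrative group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP");
        return 1;
    }

    char buf[2048];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);
    printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}
```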

If you run into any issues with this configuration, please open a ticket.

Marc