I am not a software guy, so I am seeking guidance. I have a server sending a stream of data over 40 GbE. I have a GPU server that hosts a Tesla GPU and uses CUDA to process data (right now copied from host memory). The GPU server also has a 40 GbE NIC that supports GPUDirect. I would like to process data coming from the remote server. Can I accomplish everything I need from within CUDA, or do I need something that runs separately to RDMA the data from the NIC to GPU memory? I am rather confused…