I'm not a software guy, so I am seeking guidance. I have a server sending a stream of data over 40 GbE. I have a GPU server that hosts a Tesla GPU and uses CUDA to process data (right now copied from host memory). The GPU server also has a 40 GbE NIC that supports GPUDirect. I would like to process the data coming from the remote server. Can I accomplish everything I need from within CUDA, or do I need something that runs separately to RDMA the data from the NIC into GPU memory? I am rather confused…
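To make it concrete, here is the kind of thing I think I would need: a separate host-side program that uses the verbs API to register GPU memory with the NIC, so the NIC can DMA incoming data straight into the GPU. This is only a sketch of my understanding, assuming a libibverbs-capable NIC and the nvidia-peermem (formerly nv_peer_mem) kernel module; I have not verified it on my hardware, and the buffer size is just an example:

```c
/* Sketch: registering CUDA device memory with an RDMA-capable NIC
 * (GPUDirect RDMA). Assumes a verbs-capable 40 GbE NIC, libibverbs,
 * and the nvidia-peermem kernel module loaded. Error handling is
 * abbreviated; this is illustrative, not production code. */
#include <stdio.h>
#include <infiniband/verbs.h>
#include <cuda_runtime.h>

#define BUF_SIZE (1 << 20)  /* illustrative 1 MiB receive buffer */

int main(void) {
    /* 1. Allocate the destination buffer directly in GPU memory. */
    void *gpu_buf = NULL;
    if (cudaMalloc(&gpu_buf, BUF_SIZE) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* 2. Open the first RDMA device and allocate a protection domain. */
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* 3. Register the GPU pointer with the NIC. With nvidia-peermem
     * loaded, ibv_reg_mr accepts a CUDA device pointer, letting the
     * NIC DMA incoming data into GPU memory, bypassing host RAM. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "ibv_reg_mr on GPU memory failed\n");
        return 1;
    }
    printf("GPU buffer registered, rkey=0x%x\n", mr->rkey);

    /* From here one would create a queue pair, exchange the rkey and
     * buffer address with the sending server, and post receives; the
     * remote side RDMA-writes into gpu_buf, after which CUDA kernels
     * can process the data in place. */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cudaFree(gpu_buf);
    return 0;
}
```

If that is roughly right, then CUDA alone is not enough: the registration and queue-pair setup happen in a separate host program, and CUDA only processes the buffer once the NIC has written into it. Is that the correct picture?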