Hi,
I’m wondering if the Orin Nano supports GPUDirect RDMA. I’ve been working with the DOCA sample applications, specifically doca_gpu_packet_processing. I got it compiled successfully, but when it asks for the GPU’s PCIe address, I don’t think there is one, since it’s an integrated GPU. Is this due to how the sample program is implemented, or is GPUDirect RDMA not supported on the Orin Nano?
If it’s not supported, is there an equivalent technology I’m not finding?
*** Please note that this reply is generated by LLM automatically ***
GPUDirect RDMA is a technology that allows for direct memory access between NVIDIA GPUs and other PCIe devices, bypassing the CPU. However, it is primarily designed for discrete GPUs, not integrated GPUs like the ones found in the Jetson Orin Nano.
The Jetson Orin Nano has an integrated GPU that shares physical DRAM with the CPU, so the standard GPUDirect RDMA path is not directly applicable. This is also why you can’t find a PCIe address for the GPU: the integrated GPU is not a PCIe device, and it accesses system memory through the SoC’s memory fabric rather than over PCIe.
While the standard GPUDirect RDMA stack is not supported on the Orin Nano, there are alternatives that can provide similar benefits. One is GPUDirect Storage, which allows direct transfers between the GPU and storage devices such as NVMe SSDs; however, it too primarily targets discrete GPUs and may not be directly applicable to the Orin Nano.
A more practical alternative follows from the shared memory: a device can DMA into ordinary system memory, and the GPU can read that memory in place. Combined with CUDA zero-copy (mapped pinned memory) or Unified Memory, this gives much of the benefit of GPUDirect RDMA without an extra copy.
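As a rough illustration of the zero-copy approach on Jetson (a sketch, not taken from the DOCA sample; the buffer size and kernel here are arbitrary): a host buffer allocated as mapped pinned memory sits in the same DRAM the iGPU uses, so a kernel can read it directly, with no cudaMemcpy. A NIC DMA-ing packets into such a buffer would make them visible to the GPU the same way.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Trivial kernel that reads the shared buffer in place.
__global__ void sum(const int *data, int n, int *out) {
    int s = 0;
    for (int i = 0; i < n; ++i) s += data[i];
    *out = s;
}

int main(void) {
    const int n = 256;
    int *buf, *result;

    // cudaHostAllocMapped gives a host allocation the GPU can address
    // directly. On Jetson this is the same physical DRAM, so no copy
    // is performed; with unified virtual addressing the host pointer
    // can be passed straight to the kernel.
    cudaHostAlloc(&buf, n * sizeof(int), cudaHostAllocMapped);
    cudaHostAlloc(&result, sizeof(int), cudaHostAllocMapped);

    for (int i = 0; i < n; ++i) buf[i] = 1;

    sum<<<1, 1>>>(buf, n, result);
    cudaDeviceSynchronize();

    printf("sum = %d\n", *result);

    cudaFreeHost(buf);
    cudaFreeHost(result);
    return 0;
}
```

The trade-off is that the device writing the data still targets system memory, not GPU-private memory; on Jetson that distinction mostly disappears because there is only one physical memory pool.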
In terms of equivalent technologies, the Orin Nano supports other NVIDIA technologies, such as:
- CUDA: A parallel computing platform that allows developers to harness the power of the GPU for general-purpose computing.
- cuDNN: A library of GPU-accelerated primitives for deep neural networks.
- TensorRT: A software development kit for optimizing and deploying deep learning models on NVIDIA GPUs.
These technologies can be used to accelerate various workloads on the Orin Nano, including machine learning, computer vision, and more.
In summary, while GPUDirect RDMA is not supported on the Jetson Orin Nano, there are alternative technologies that can provide similar benefits. Developers can use the CPU’s memory interface, CUDA, cuDNN, and TensorRT to accelerate various workloads on the Orin Nano.
Comparison with other NVIDIA hardware:
- Jetson Xavier NX: also an integrated GPU, so the same limitation applies.
- Jetson AGX Orin: also an integrated GPU; it does not use the discrete-GPU GPUDirect RDMA path either.
- NVIDIA Tesla V100: a discrete GPU that supports GPUDirect RDMA.
In general, GPUDirect RDMA as commonly documented targets NVIDIA’s discrete GPUs, not the integrated GPUs in the Jetson family.
Hi,
Based on the document below, you will need a DPU for DOCA.
But Jetson does support GPUDirect RDMA with an external FPGA.
Please find our example below:
Thanks.
I see that the picoevb sample supports the AGX Xavier. Can it be ported to the Orin Nano?
Side question: can you explain (or point me to resources on) how the Orin and AGX can support GPUDirect RDMA with the iGPU, given that the iGPU is not on the same PCIe root complex?
Hi,
Yes, please check out the rel-36+ branch for JetPack 6.
You can find some details for the GPU RDMA technique on Jetson in the above link.
In step 3, the FPGA driver pins the CUDA buffer through the CUDA driver:
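For reference, that pinning step happens in the FPGA’s kernel driver through the Jetson nv-p2p interface. This is a heavily trimmed sketch modeled on the picoevb sample, not a complete driver; the function names (pin_cuda_buffer, free_cb) are mine, and exact signatures can vary between JetPack releases, so check the nv-p2p.h shipped with your kernel:

```c
#include <linux/nv-p2p.h>  /* Jetson nv-p2p kernel API (not the dGPU variant) */

/* Callback invoked if the CUDA driver revokes the mapping underneath us;
 * a real driver must tear down any DMA state referencing the pages here. */
static void free_cb(void *data)
{
}

/* vaddr/size describe a GPU-accessible buffer that the userspace app
 * allocated (e.g. with cuMemAlloc) and handed to the driver via an ioctl.
 * On success, *pt describes the pinned physical pages the FPGA can DMA to. */
static int pin_cuda_buffer(u64 vaddr, u64 size,
                           struct nvidia_p2p_page_table **pt)
{
    return nvidia_p2p_get_pages(vaddr, size, pt, free_cb, NULL);
}
```

After pinning, the driver would typically call nvidia_p2p_dma_map_pages to get bus addresses usable by the FPGA’s DMA engine; that is why this path works even though the iGPU is not behind the PCIe root complex — the “GPU memory” being targeted is just pinned system DRAM.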
Thanks.