We have a POC we are building using our Gen4 PCIe Switch for GTC. We would like to show our ability to tie multiple NVIDIA resources together.
My understanding is that in a topology where one Jetson is an endpoint to another Jetson, the root-complex (RC) Jetson will not have any access to the endpoint Jetson's compute resources. So in such a scenario we would not gain any additional performance.
However, what if I were able to add an additional GPU resource to the Jetson host? Could the Jetson detect and utilize these additional resources during video inference?
The two Jetsons would be able to DMA memory to each other, but they would not be able to directly control each other's GPU. Each Xavier controls its own GPU. Distributing the computational workload would be left up to the user's application or a multiprocessing framework like MPI.
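To illustrate what "left up to the user's application" means in practice, here is a minimal sketch of statically partitioning a video-inference workload between two nodes. This is not NVIDIA-provided code: the function names (`infer`, `split_frames`) are hypothetical, local processes stand in for the two Jetsons, and in a real deployment each partition would run on its own Jetson with results exchanged over the PCIe link (or via MPI send/recv).

```python
# Hypothetical sketch: dividing per-frame inference work across two
# workers. Each worker models one Jetson running inference on its own
# GPU; the host application only decides who gets which frames.
from multiprocessing import Pool

def infer(frame_id):
    # Placeholder for per-frame GPU inference on one Jetson.
    # Returns a dummy "result" so the sketch is runnable.
    return frame_id * frame_id

def split_frames(frames, n_workers=2):
    # Static round-robin partition: one chunk of frames per Jetson.
    return [frames[i::n_workers] for i in range(n_workers)]

if __name__ == "__main__":
    frames = list(range(8))
    chunks = split_frames(frames)          # [[0,2,4,6], [1,3,5,7]]
    with Pool(len(chunks)) as pool:
        # Each chunk is processed by a separate worker process,
        # standing in for a separate Jetson.
        results = [pool.map(infer, chunk) for chunk in chunks]
    print(results)
```

The same partitioning pattern carries over directly to MPI: rank 0 and rank 1 would each run `infer` on their own chunk, and a gather step would collect the results on the host.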