We have a POC we are building using our Gen4 PCIe switch with multiple Jetson Xaviers. The idea is to showcase several example visual flows, one per Jetson, and ideally include some "host to host" communication.
I was wondering if there is any information on setting up either of these use cases:
CASE 1:
Configure both Jetsons for EP mode so that they both show up under a third-party host (x8 Gen4 EP)
Are the GPUs accessible this way?
CASE 2:
Configure both Jetsons behind NTB in the switch and provide an ntb_perf path between them to run some workloads
We can provide the NTB drivers to punch through the two partitions and allow Xavier-to-Xavier communication
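For the NTB path, the mainline Linux NTB stack is the usual starting point. A rough sketch of bringing up an ntb_perf run on each Xavier follows; module names and the debugfs layout vary by kernel version, and ntb_hw_switchtec is assumed here because it is the mainline NTB hardware driver for Microchip/Microsemi Switchtec parts (substitute the driver for your switch):

```shell
# On each Xavier behind the NTB (paths and interfaces differ across
# kernel versions; check Documentation/driver-api/ntb.rst for yours).
modprobe ntb_hw_switchtec   # NTB hardware driver for the switch (assumed)
modprobe ntb_transport      # generic NTB transport layer
modprobe ntb_perf           # raw cross-partition throughput benchmark

# ntb_perf is driven through debugfs once the NTB link is up:
cd /sys/kernel/debug/ntb_perf/*/
echo 0 > run                # start a run against peer index 0
cat run                     # poll for throughput results
```

On recent kernels the peer index written to `run` selects which partition to test against; older kernels use a slightly different file set, so treat the debugfs commands above as illustrative rather than exact.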
I’m looking for some high level guidance on what might have been tried.
Note that if you make a cable to connect the Jetson AGX Xavier Developer Kit to another system, be sure to disable the power rails on the cable to prevent power back-drive between systems, which could damage the devkit. You would also need to swap the TX/RX signals.
No, Xavier's integrated GPU cannot act as a discrete GPU over PCIe; it is only usable from within that Xavier itself.
Hi Keith, NVLink is not supported on Jetson AGX Xavier.
Xavier's 16 PCIe lanes are split across five controllers (1×8, 1×4, 1×2, and 2×1). The x8, x4, and x2 controllers can operate in endpoint mode, so those lanes can be used for an endpoint.
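To make the endpoint visible to a host, the Jetson is flashed with the endpoint-mode ODMDATA and a PCI endpoint function is then bound through the kernel's configfs interface. A minimal sketch, assuming the `pci_epf_nv_test` function driver and the AGX Xavier x8 EP controller name (`141a0000.pcie_ep`); both depend on your L4T release and device tree, so verify them on your board:

```shell
# Run on the Xavier after flashing it in endpoint mode.
cd /sys/kernel/config/pci_ep/

# Create a test endpoint function and assign it an identity
# (vendor/device IDs below are placeholders -- pick your own).
mkdir functions/pci_epf_nv_test/func1
echo 0x10de > functions/pci_epf_nv_test/func1/vendorid
echo 0x0001 > functions/pci_epf_nv_test/func1/deviceid

# Bind the function to the x8 EP controller and bring up the link
ln -s functions/pci_epf_nv_test/func1 controllers/141a0000.pcie_ep/
echo 1 > controllers/141a0000.pcie_ep/start
```

After `start`, a rescan on the host side should enumerate the Xavier as a PCIe function with the IDs set above.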
In theory you could develop a Windows driver for it, but the drivers provided are for Linux.