Interfacing Xavier modules to a Linux host and using them as AI accelerators


Is there a way to interface one or more Xavier modules via PCIe x16 expansion boards or USB 3.1? I have several requests and use cases for such a setup and frankly don’t know if it’s possible… The host would be a modern x64 machine running a recent Linux distribution, with an NVIDIA Quadro GPU as the primary display.

The end goal would be to create an API that abstracts the communication and exposes accelerated services/operations like another CUDA device, maybe?!? The more modules you plug in, the more work gets parallelized…
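Just to make the idea concrete, here is a minimal sketch of what such an abstraction layer could look like on the host side. Everything here is hypothetical: the `AcceleratorPool` class, the round-robin scheduling, and the `fake_inference` stand-in are illustrations only, not an existing NVIDIA API. A real implementation would dispatch each job to an attached Xavier module (over PCIe endpoint mode or the network) instead of computing locally.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

class AcceleratorPool:
    """Hypothetical pool that spreads jobs across attached modules."""

    def __init__(self, num_modules):
        # One worker per attached module; adding modules widens the pool,
        # which is the "more modules, more parallelism" idea from the post.
        self.executor = ThreadPoolExecutor(max_workers=num_modules)
        self.modules = cycle(range(num_modules))

    def submit(self, fn, *args):
        # Round-robin assignment of work to module IDs.
        module_id = next(self.modules)
        return self.executor.submit(fn, module_id, *args)

def fake_inference(module_id, x):
    # Stand-in for an offloaded operation (e.g., running a model on
    # the module identified by module_id); here it just squares x.
    return x * x

pool = AcceleratorPool(num_modules=3)
futures = [pool.submit(fake_inference, i) for i in range(4)]
results = [f.result() for f in futures]
print(results)  # [0, 1, 4, 9]
```

The round-robin scheduler is the simplest possible policy; a real design would likely track per-module load and queue depth instead.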

Much appreciated

Dear @maxxdesktop,
Are you asking about the DRIVE Xavier platform?

Hello, I am actually considering all options… I love the Docker route, but it looks like you need a Xavier-type device… maybe I am wrong.

Thanks again

Hi @maxxdesktop,

I want to confirm what you are trying to do so we can best help you. Are you using a Jetson AGX Xavier platform or the NVIDIA DRIVE platform for in-vehicle applications? It would be an atypical use case to plug multiple DRIVE platforms into a host PC as you have described.