External graphics card on Xavier AGX

Is it possible to run an NVIDIA graphics card (Quadro, RTX, etc.) in a Xavier AGX? This has been asked before, but there doesn’t seem to be a clear answer.

I’ve tried to pull together the threads I’ve found on this issue below. Sorry for referring to threads by number only, but as a new account I can only include one link, and I used it for the working case.

In 2019, in thread 65310, @dusty_nv said that support was planned for a future release. That thread links to a FAQ, but the updated FAQ does not seem to address this.

In 2021, @1197419645 posted a video of a GTX 1080 Ti apparently working in a Xavier AGX; however, no software versions were given. If @1197419645 has more details and could post them, that would be helpful.
Add external graphics card on Jetson AGX Xavier Developer Kit - Jetson & Embedded Systems / Jetson AGX Xavier - NVIDIA Developer Forums

In 2022 @dusty_nv posted in thread 202793:

those PCIe drivers and discrete GPU aren’t supported on Jetson devices running JetPack-L4T

I’ve tried to install versions 460.67, 470.129.06, 510.73.05 and 515.43.04 of the “NVIDIA Linux aarch64” driver, but always get the error:

The kernel was built with gcc version 7.3.1 20180425 [linaro-7.3-2018.05 revision d29120a424ecfbc167ef90065c0eeb7f91977701] (Linaro GCC 7.3-2018.05) , but the current compiler version is cc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0.

I ignore it and continue with the install, but eventually get this error:

ERROR: Unable to load the kernel module ‘nvidia.ko’. This happens most frequently when this kernel module was built against the wrong or improperly configured kernel sources, with a version of gcc that differs from the one used to build the target kernel, or if another driver, such as nouveau, is present and prevents the NVIDIA kernel module from obtaining ownership of the NVIDIA GPU(s), or no NVIDIA GPU installed in this system is supported by this NVIDIA Linux graphics driver release.
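For reference, the install invocation itself was nothing special; it was something like the following (the file name below is a placeholder for whichever driver version is being tried):

    chmod +x NVIDIA-Linux-aarch64-510.73.05.run
    sudo ./NVIDIA-Linux-aarch64-510.73.05.run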

This error is also noted in thread 154659.

I’m currently using train_ssd.py and get about 15 images per second on the AGX and about 15 images per second on the graphics card (when it is installed in a PC). I’d like to combine the two to reduce training time and also use less power.
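For context, this is roughly how I’m running the training, following the jetson-inference Hello AI World SSD examples (the dataset path, model directory, and hyperparameters below are placeholders from my setup):

    # Placeholder dataset/model paths and hyperparameters
    python3 train_ssd.py --dataset-type=voc --data=data/my-dataset \
            --model-dir=models/my-model --batch-size=4 --epochs=30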

Will this be supported in JetPack 4 or 5, or in some other way? Is it possible to get it to work without it being officially supported?

No, as Dusty mentioned, those PCIe drivers and discrete GPU aren’t supported on Jetson devices running JetPack-L4T.

If an external NVIDIA dGPU card is a must-have requirement for your project, you can use the Clara AGX devkit; see Advanced Computation for AI-Powered Medical Devices | NVIDIA

Thanks @kayccc. Is this feature still planned?

Sorry to say, No.

Glad to see other people trying this. I can tell you what I did and how the external graphics card performed.

First, what I did:
Pay attention to the output when the driver kernel module fails to install; it contains the most important clue. It says that your kernel was built with gcc version 7.3.1, while the driver module is being built with gcc version 7.5.0. This mismatch is the main reason the kernel module cannot be loaded. The solution is to compile gcc 7.3.1 from source, and then tell the installer on the command line to use that new compiler when it builds the graphics card driver kernel module, as in the sketch below.
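A minimal sketch of that approach (the file names and install prefix are placeholders; that nvidia-installer honours the CC environment variable is my understanding, with --no-cc-version-check as a fallback if it still complains):

    # Build gcc 7.3.x from source into /opt/gcc-7.3 (placeholder prefix).
    tar xf gcc-7.3.0.tar.xz                     # or the Linaro GCC 7.3-2018.05 source
    cd gcc-7.3.0
    ./contrib/download_prerequisites            # fetch gmp/mpfr/mpc needed for the build
    mkdir build && cd build
    ../configure --prefix=/opt/gcc-7.3 --enable-languages=c,c++ --disable-multilib
    make -j$(nproc) && sudo make install
    cd ../..

    # Re-run the driver installer so the new compiler builds nvidia.ko.
    sudo env CC=/opt/gcc-7.3/bin/gcc sh ./NVIDIA-Linux-aarch64-470.129.06.run --no-cc-version-check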

Then, about performance:
Very bad. First, installing the external graphics card driver renders the built-in GPU completely unavailable. Second, the newly installed external graphics card can only handle simple tasks such as display output and some label classification; once more advanced algorithms are run, the device shuts down completely.

@1197419645, thanks for the steps, that makes sense.

Just FYI, I installed JetPack 5.0.1 DP and tried installing version 515.43.04 of the Linux aarch64 driver, and it installed with no warnings. The card appears to be functional (it shows up in nvidia-smi). I also tried version 510.73.05, which works as well but warns about incompatibilities with the build environment at the beginning.
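For anyone wanting to reproduce this on JetPack 5.0.1 DP, the whole sequence was essentially just the stock installer followed by a quick check (file name as downloaded from NVIDIA’s driver download page):

    sudo sh ./NVIDIA-Linux-aarch64-515.43.04.run
    nvidia-smi        # the external card shows up here after the install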

It works basically as you describe: the onboard graphics seem to be disabled, but I was able to get the GUI displayed on the output of the external graphics card.

I tried to test with the train_ssd.py script, but the version of PyTorch (1.12) included with dusty’s jetson-inference installer doesn’t support my card (my card is CUDA compute capability 5.2, while the included PyTorch is built for 6.2 and greater), so I hit a dead end in my testing. I may try to compile PyTorch manually at some point and see if I can get it to run.
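If anyone wants to check whether their PyTorch build supports a given card before hitting the same wall, this is the kind of quick check I mean; the TORCH_CUDA_ARCH_LIST line is only relevant if you go on to build PyTorch from source, and 5.2 is just my card’s compute capability:

    # Print which CUDA architectures this PyTorch build targets, and what the card reports.
    python3 -c "import torch; print(torch.cuda.get_arch_list()); print(torch.cuda.get_device_capability(0))"

    # If building PyTorch from source, include the card's architecture, e.g. for a 5.2 card:
    export TORCH_CUDA_ARCH_LIST="5.2"
    # ...then follow the usual PyTorch source-build steps.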
