DRIVE OS 6.0.6
Target OS: Linux
SDK Manager Version: 188.8.131.5284
Host Machine Version: native Ubuntu Linux 20.04, host installed with DRIVE OS Docker Containers
Describe the bug
From the CUDA docs,

1. CUDA for Tegra — CUDA for Tegra 12.3 documentation

we can see that "A dGPU with separate DRAM memory can be connected to the Tegra device over PCIe or NVLink. It is currently supported only on the NVIDIA DRIVE platform." An overview of a dGPU-connected Tegra® memory system is shown in the figure there.
Does DRIVE Orin support an NVLink/PCIe link to a dGPU such as an RTX 3090? If so, where can I get hardware and software guidance?
What's the platform solution for chaining one Orin device to a host PC with a dGPU to transfer workloads efficiently? Is it NvSci over PCIe? If so, can you provide more info about the solution?
I've searched and read several topics on the NVIDIA Forum, quoted below, but I'd like to get official support in this forum specifically for DRIVE Orin.
In one of the keynotes presenting the Jetson Xavier, it was mentioned that the PCIe slot can be used for an additional GPU. I have a few questions about that.
Which kind of GPU can be combined with the Jetson Xavier?
How do I power the additional GPU? A normal GPU in a computer needs additional power sources.
How do I combine the power from the Xavier with the additional GPU? Can I just use one or both of them?
Would be interesting to hear your thoughts and experiences about that. …
Of course I would need to provide external power for the GPU, but I was wondering if it is even possible to use the PCIe slot to connect an external GPU to the Orin for potentially unlimited AI power.
I have had great success hosting some of the large language models like Alpaca and Vicuna (7B and 13B versions) on my Jetson Orin. I am upgrading my workstation with a 4090, which leaves my RTX 3080 without a home.
I would like to try to use the 3080 in the PCIe slot available on the Orin dev kit. I have an extra ATX power supply for the 3080, and I can rig an enclosure to hold it in place. What considerations am I going to have to take, though, in order to get this setup working?
Good point about GPU memory bandwidth. The datasheets claim 32GB 256-bit LPDDR5 at 204.8 GB/s for Jetson AGX Orin and 8GB 128-bit GDDR6 at 224 GB/s for RTX 3050. A small bandwidth penalty for Orin, but much larger fast memory.
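Those datasheet figures can be sanity-checked with back-of-the-envelope arithmetic: peak theoretical bandwidth is bus width in bytes times the effective transfer rate. A minimal sketch, assuming 6400 MT/s for Orin's LPDDR5 and 14 Gbps effective for the RTX 3050's GDDR6 (these rates are assumptions chosen to match the quoted numbers, not stated in this thread):

```python
def peak_bandwidth_gbs(bus_width_bits: int, rate_mtps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s.

    bus_width_bits: memory bus width in bits
    rate_mtps: effective data rate in mega-transfers per second
    """
    # bytes per transfer * millions of transfers per second -> GB/s
    return (bus_width_bits / 8) * rate_mtps / 1000

# Jetson AGX Orin: 256-bit LPDDR5, assumed 6400 MT/s effective
print(peak_bandwidth_gbs(256, 6400))   # 204.8
# RTX 3050: 128-bit GDDR6, assumed 14 Gbps (14000 MT/s) effective
print(peak_bandwidth_gbs(128, 14000))  # 224.0
```

Both results line up with the datasheet numbers quoted above, which supports the "small bandwidth penalty, much larger capacity" reading.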
Isn’t the most current CUDA version available for Jetson Orin too?
Could you please provide more information on your use case? Why are you considering either directly connecting the dGPU or using a host PC?
1. In order to get more compute resources in an ADAS system, a DRIVE Orin is connected directly to a dGPU (RTX 3090) through NVLink/PCIe.
2. In order to get more compute resources in an ADAS system, a DRIVE Orin is connected to an x86 PC hosting a dGPU (RTX 3090) through NVLink/PCIe.

Which one is possible, and what's the hardware and software guidance?
The shared diagram was in the context of the DRIVE PX 2 and DRIVE AGX Pegasus devkits, where we had a dGPU that could be connected to the Tegra SoC. For the DRIVE AGX Orin devkit we don't have such a configuration. Hope it clarifies.