[QST] Which of NVLink/PCIe should be used to chain a Drive Orin to one dGPU to scale performance?

Required Info:

  • Software Version
    DRIVE OS 6.0.6
  • Target OS
    Linux
  • SDK Manager Version
    1.9.2.10884
  • Host Machine Version
    native Ubuntu Linux 20.04 Host installed with DRIVE OS DOCKER Containers

Describe the bug

From the CUDA docs (1. CUDA for Tegra — CUDA for Tegra 12.3 documentation),
we can see that a dGPU with separate DRAM memory can be connected to the Tegra device over PCIe or NVLink, and that this is currently supported only on the NVIDIA DRIVE platform.

An overview of a dGPU-connected Tegra® memory system is shown in Figure 1.

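For reference, here is a minimal sketch (not specific to Drive Orin, and assuming a hypothetical system where both the integrated Tegra GPU and a PCIe/NVLink-attached dGPU were visible to CUDA) of how the two kinds of devices can be told apart at runtime via the `integrated` device property:

```cpp
// Hedged sketch: enumerate CUDA devices and report whether each one is the
// integrated Tegra GPU or a discrete (PCIe/NVLink-attached) GPU. This only
// lists what the CUDA runtime can see; it does not imply that a dGPU can be
// attached to a given devkit.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        std::printf("Device %d: %s (%s GPU, %d SMs)\n",
                    dev, prop.name,
                    prop.integrated ? "integrated" : "discrete",
                    prop.multiProcessorCount);
    }
    return 0;
}
```
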
  1. Does Drive Orin support an NVLink/PCIe link to a dGPU such as an RTX 3090? If so, how can I get the hardware and software guidance? (See the peer-access sketch after this list.)
  2. What’s the platform solution to chain one Orin device to a host PC with a dGPU and transfer workloads efficiently? Is it NvSci over PCIe? If so, can you provide more info about the solution?

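To make question 1 more concrete, the following is a hedged sketch (assuming a hypothetical two-device configuration, with the Orin iGPU as device 0 and an attached dGPU as device 1) of how direct peer access over a PCIe/NVLink interconnect would be queried and enabled with standard CUDA runtime calls. It is not a statement that such a configuration is supported on the Orin devkit.

```cpp
// Hedged sketch: query and enable CUDA peer access between two devices.
// The device indices below are assumptions for illustration; on an actual
// DRIVE AGX Orin devkit only the integrated GPU is present (see the reply
// further down in this thread).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    const int igpu = 0;  // assumed: integrated Tegra GPU
    const int dgpu = 1;  // assumed: discrete GPU attached over PCIe/NVLink

    int canAccess = 0;
    // Can the iGPU directly read/write the dGPU's memory over the interconnect?
    if (cudaDeviceCanAccessPeer(&canAccess, igpu, dgpu) != cudaSuccess || !canAccess) {
        std::printf("No direct peer access from device %d to device %d\n", igpu, dgpu);
        return 0;
    }

    // Enable peer access from the iGPU's context to the dGPU's memory.
    cudaSetDevice(igpu);
    cudaDeviceEnablePeerAccess(dgpu, 0 /* flags must be 0 */);

    // From here, kernels running on the iGPU could dereference pointers
    // allocated with cudaMalloc on the dGPU, and cudaMemcpyPeer/Async could
    // move data directly between the two memories.
    std::printf("Peer access enabled from device %d to device %d\n", igpu, dgpu);
    return 0;
}
```
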
Additional context

I’ve searched and read several related topics on the NVIDIA Forum; I’d like to get official support here specifically for Drive Orin.

Friendly ping @SivaRamaKrishnaNV and @VickNV

Could you please provide more information on your use case? Why are you considering either directly connecting the dGPU or using a host PC?

usecase1:

In order to get more compute resources in an ADAS system, a Drive Orin is connected to a dGPU (RTX 3090) through NVLink/PCIe.

usecase2:

In order to get more compute resources in an ADAS system, a Drive Orin is connected to an x86 PC with a dGPU (RTX 3090) through NVLink/PCIe.

Which one is possible and what’s the hardware and software guidance?

Thanks.

Dear @lizhensheng,
The shared diagram was in the context of the DRIVE PX2/DRIVE AGX Pegasus devkits, where a dGPU could be connected to the Tegra SoC. The DRIVE AGX Orin devkit does not have such a configuration. Hope this clarifies.
