GPU Usage

Please provide the following info (tick the boxes after creating this topic):
Software Version
[*] DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[*] Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
[*] DRIVE AGX Orin Developer Kit (not sure of its part number)
other

SDK Manager Version
1.9.3.10904
[*] other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
[*] native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Hi,

What are the methods by which the GPU can be used on the Orin board in this release?
My current understanding is that the model has to be compiled by the tensorRT_optimization tool for the GPU to be used. Is this correct?
Also, I would like to confirm how these models can be invoked. Can they be run using Python (via the TensorRT API), or are they available exclusively through the DriveWorks API?

Also, once a model is running, how can I confirm that it is running on the GPU?

Is it via the tegrastats utility? If so, is GPU usage measured using the GR3D_FREQ value? And how can this value be attributed to the model's usage rather than to GUI/render usage?

Thanks,
Gokul

Dear @gokul.soman,
My current understanding is that the model has to be compiled by the tensorRT_optimization tool for the GPU to be used. Is this correct?

Yes. If you want to integrate your model into the DriveWorks (DW) DNN framework, you need to use the tensorRT_optimization tool to generate a DW-compatible model.
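As a hedged sketch of how that conversion step might be scripted: the tool path and the flag names (`--modelType`, `--onnxFile`, `--out`) below are assumptions based on typical DriveWorks releases, so verify them with `tensorRT_optimization --help` on your installation.

```python
import subprocess

def generate_dw_model(onnx_file: str, out_file: str) -> None:
    """Invoke the DriveWorks tensorRT_optimization tool on an ONNX model.

    NOTE: the tool path and flag names are assumptions; check them against
    the DriveWorks documentation for your release before relying on this.
    """
    subprocess.run(
        [
            "./tensorRT_optimization",   # path inside your DW tools directory
            "--modelType=onnx",
            f"--onnxFile={onnx_file}",
            f"--out={out_file}",
        ],
        check=True,
    )
```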

Can they be run using Python (via the TensorRT API), or are they available exclusively through the DriveWorks API?

You can directly use the TensorRT (TRT) API to generate a TRT engine from ONNX. You can also use the trtexec tool to achieve that. Note that the tensorRT_optimization tool is a wrapper around the TRT APIs.
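A minimal sketch of the ONNX-to-engine path using the TensorRT 8.x Python API (the file names are placeholders; builder-config options such as workspace size and precision are omitted for brevity):

```python
def build_engine_from_onnx(onnx_path: str, engine_path: str) -> None:
    """Parse an ONNX file and serialize a TensorRT engine to disk."""
    import tensorrt as trt  # available on DRIVE OS targets with TensorRT installed

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch networks are required for ONNX models.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed")
    config = builder.create_builder_config()
    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized)
```

The command-line equivalent with trtexec is `trtexec --onnx=model.onnx --saveEngine=model.engine`.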

how can I confirm that the model is running on the GPU?

You can use nsys to obtain an API trace on the GPU. tegrastats just shows overall GPU utilisation; you can compare the change in GPU usage before and after launching the DL model to estimate the model's GPU usage.
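The before/after comparison can be sketched as a small parser over captured tegrastats output: extract the GR3D_FREQ (iGPU load) percentage from each line and compare the average while idle against the average while the model runs. The sample lines below are illustrative, not real tegrastats captures.

```python
import re

def gr3d_load(line):
    """Extract the GR3D_FREQ (iGPU load) percentage from one tegrastats line."""
    m = re.search(r"GR3D_FREQ (\d+)%", line)
    return int(m.group(1)) if m else None

def average_load(lines):
    """Average GR3D_FREQ over a capture, ignoring lines without the field."""
    loads = [v for v in (gr3d_load(l) for l in lines) if v is not None]
    return sum(loads) / len(loads) if loads else 0.0

# Illustrative captures only -- real tegrastats lines carry many more fields.
idle = ["RAM 3904/28458MB ... GR3D_FREQ 2% ...",
        "RAM 3905/28458MB ... GR3D_FREQ 3% ..."]
busy = ["RAM 5120/28458MB ... GR3D_FREQ 71% ...",
        "RAM 5121/28458MB ... GR3D_FREQ 69% ..."]
print(average_load(busy) - average_load(idle))  # → 67.5, extra load while the model runs
```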

Hi, I tried the nsys command, but I am unable to run it on the DRIVE AGX Orin. This is the error I get.
[Screenshot: Screenshot from 2024-03-04 16-10-20]

Can you help with how to use this tool?

Also, these questions were left unanswered:

  • Is model compilation through the tensorRT_optimization tool necessary for the GPU to be used?
    If not, what are the other methods?

  • Can models be run on the DRIVE AGX Orin through Python AND use the device GPU? If so, how?

  • If tegrastats is used for GPU usage, how can I tell whether the GPU is being used by the video pipeline/render or by deep learning model inferencing?

  • As an added question, can the GPU be used directly by the ONNX model (using onnxruntime), without converting it to a TensorRT model?

Thanks,
Gokul

Dear @gokul.soman,
The very first time, you need to connect to the target remotely from the host. This installs the binaries and libraries needed to run nsys directly on the target.

  1. The GPU is used to run any CUDA task. If you want to run a CUDA application, you can compile it with nvcc and generate an executable that runs CUDA tasks on the GPU. But if you want to run your DL model on the GPU, you can directly use the TRT APIs or trtexec to generate a TRT engine that uses the GPU. The tensorRT_optimization tool is needed only if you want to use the DW DNN APIs to deploy your model.
  2. Yes. We have TRT Python samples.
  3. The iGPU utilisation reported by tegrastats includes both display/render and compute usage. The value shown may read low; you can ignore the absolute number, as it mainly tells you that the GPU is in use.
  4. Yes
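Expanding on point 4, a minimal sketch of running an ONNX model directly through onnxruntime's CUDA execution provider; this assumes a GPU-enabled onnxruntime build is installed on the target, which you should confirm for your DRIVE OS release.

```python
def run_onnx_on_gpu(model_path, input_feed):
    """Run an ONNX model via onnxruntime, preferring the CUDA provider.

    NOTE: assumes a GPU-enabled onnxruntime build is present on the target;
    onnxruntime silently falls back to CPU if the CUDA provider is missing.
    """
    import onnxruntime as ort

    sess = ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    print(sess.get_providers())  # shows which providers were actually selected
    return sess.run(None, input_feed)  # None -> return all model outputs
```

Checking `sess.get_providers()` is a quick way to confirm whether inference is really hitting the GPU.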

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.