Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
[*] DRIVE OS 6.0.4 SDK
Target Operating System
Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
SDK Manager Version
Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
When I run our deep learning model with TensorRT on DLA0 and DLA1, only a few operators run on the GPU, yet tegrastats reports GR3D_FREQ at about 40%.
However, Nsight Systems analysis shows that the GPU is busy for only a small fraction of the total time, as shown in the figure below.
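For reference, the engine is built roughly like this (a simplified sketch; the `build_dla_engine` helper name and the `model.onnx` path are placeholders, and the rest follows the standard TensorRT 8.x Python API with GPU fallback enabled, so only layers DLA cannot run fall back to the GPU):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)

def build_dla_engine(onnx_path, dla_core=0):
    # Parse the ONNX model into a TensorRT network (explicit batch).
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)          # DLA needs FP16 or INT8
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)  # unsupported layers go to the GPU
    config.default_device_type = trt.DeviceType.DLA
    config.DLA_core = dla_core                     # 0 or 1 on Orin

    return builder.build_serialized_network(network, config)

# engine_bytes = build_dla_engine("model.onnx", dla_core=0)
```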
Do you see any difference when you change the interval parameter in tegrastats?
Nsight Systems shows a timeline view of the application's execution. Tegrastats shows how busy the GPU is at the moment each sample is taken, and its output is refreshed at the configured interval. It does not indicate whether the GPU is in use constantly.
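As a rough sketch (assuming tegrastats is on the target's PATH; it may need to be run with sudo), you can log GR3D_FREQ at a shorter interval and see how much the instantaneous samples fluctuate between readings:

```python
import re
import subprocess

# Sample tegrastats at 100 ms instead of the default 1000 ms.
proc = subprocess.Popen(
    ["tegrastats", "--interval", "100"],
    stdout=subprocess.PIPE, text=True)

try:
    for line in proc.stdout:
        match = re.search(r"GR3D_FREQ (\d+)%", line)
        if match:
            # Each value is the GPU load at the sampling instant only;
            # it does not mean the GPU was busy for the whole interval.
            print(f"GR3D_FREQ sample: {match.group(1)}%")
except KeyboardInterrupt:
    proc.terminate()
```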