Record RAM & GPU memory usage when inferring a model

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure of its number)
other

SDK Manager Version
1.9.0.10816
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Hi, I’d like to record RAM usage and GPU memory usage on the Orin Linux system while inferring a plan file. Is there a solution for this?
I’ve used VmRSS in /proc/self/status and the cudaMemGetInfo API to calculate it.
My process is as follows:

  1. Enter main and record the RAM usage of the current process and the GPU memory currently used in the system.
  2. Deserialize a plan file and run an inference.
  3. Do the same thing as in step 1.
  4. Compute the GPU memory usage diff between steps 3 and 1; since Orin’s GPU shares system memory, this diff also includes RAM usage.

So the RAM usage of the current process is obtained from VmRSS, and the GPU memory usage is the diff from step 4 minus the RAM usage. But the final result I get is a negative GPU memory usage.
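
Concretely, my measurement code looks roughly like the sketch below (names such as MemSnapshot, readVmRssKb, and takeSnapshot are simplified, and the TensorRT deserialization/inference in step 2 is stubbed out):

```cpp
// Minimal sketch of the four steps above. Build against the CUDA runtime
// on a Linux target with a /proc filesystem.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstring>

struct MemSnapshot {
    long vmRssKb;   // process resident set size, kB (from /proc/self/status)
    long gpuUsedKb; // used memory, kB (from cudaMemGetInfo, total - free)
};

// Parse the VmRSS line out of /proc/self/status; the value is in kB.
static long readVmRssKb() {
    FILE* f = fopen("/proc/self/status", "r");
    if (!f) return -1;
    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

static MemSnapshot takeSnapshot() {
    size_t freeB = 0, totalB = 0;
    // Reports device-wide memory (on Orin, unified with system memory),
    // not per-process usage.
    cudaMemGetInfo(&freeB, &totalB);
    return { readVmRssKb(), (long)((totalB - freeB) / 1024) };
}

static void runInference() {
    // Step 2 placeholder: deserialize the plan file and run one inference.
}

int main() {
    MemSnapshot before = takeSnapshot();         // step 1
    runInference();                              // step 2
    MemSnapshot after  = takeSnapshot();         // step 3

    long ramDiffKb   = after.vmRssKb   - before.vmRssKb;
    long totalDiffKb = after.gpuUsedKb - before.gpuUsedKb;
    long gpuDiffKb   = totalDiffKb - ramDiffKb;  // step 4: comes out negative
    printf("ram: %ld kB  total: %ld kB  gpu: %ld kB\n",
           ramDiffKb, totalDiffKb, gpuDiffKb);
    return 0;
}
```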

Am I doing it right? If not, please help me figure out the right solution.
Many thanks.

Dear @JeremyYuan,
This forum is for developers in the NVIDIA DRIVE™ AGX SDK Developer Program. We will need you to use an account with a corporate or university email address for further support. Thanks
