Jetson Nano: How to check GPU memory usage during model training?

Let’s say you are training a model or doing some other GPU work.
How can you check the remaining GPU memory on a Jetson Nano from Python?
The ideal scenario would be a function available in e.g. numba, tensorflow, pytorch, etc., along these lines:

from something import something_showing_Jetson_Nano_GPU_Memory
total, used, remaining = something_showing_Jetson_Nano_GPU_Memory()
print(f"total {total} GB used {used} GB remaining {remaining} GB")
    total 4.00 GB used 3.51 GB remaining 0.47 GB

Hi,

We have a binary that can show GPU utilization directly.
For example, the field GR3D_FREQ 40%@1377 in the output below means the GPU (GR3D) is 40% utilized at a clock of 1377 MHz.

$ sudo tegrastats
RAM 3215/31921MB (lfb 6739x4MB) SWAP 0/15960MB (cached 0MB) CPU [1%@2265,0%@2265,0%@2265,0%@2265,0%@2265,71%@2265,0%@2265,3%@2265] EMC_FREQ 1%@2133 GR3D_FREQ 40%@1377 VIC_FREQ 0%@115 APE 150 MTS fg 0% bg 8% AO@38.5C GPU@39.5C Tdiode@44.5C PMIC@100C AUX@38C CPU@40.5C thermal@39.05C Tboard@40C GPU 3654/1760 CPU 1979/1387 SOC 2283/2301 CV 0/0 VDDRQ 457/457 SYS5V 2224/2224
...
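Since tegrastats prints plain text, one option from Python is to parse its output. Below is a minimal sketch of such a parser, run here against the sample line above; the function name is my own, and the exact field format varies between JetPack releases, so the regexes may need adjusting on your board. In practice you would feed it lines from `subprocess.Popen(["sudo", "tegrastats"], ...)`.

```python
import re

def parse_tegrastats_line(line):
    """Extract (ram_used_mb, ram_total_mb, gr3d_pct) from one tegrastats line.

    Returns None for any field that is not present in the line.
    """
    # RAM usage is reported as "RAM used/totalMB"
    ram = re.search(r"RAM (\d+)/(\d+)MB", line)
    # GPU utilization is reported as "GR3D_FREQ pct%@freq"
    gr3d = re.search(r"GR3D_FREQ (\d+)%", line)
    ram_used, ram_total = (int(ram.group(1)), int(ram.group(2))) if ram else (None, None)
    gr3d_pct = int(gr3d.group(1)) if gr3d else None
    return ram_used, ram_total, gr3d_pct

# Sample line taken from the tegrastats output above (abbreviated)
sample = ("RAM 3215/31921MB (lfb 6739x4MB) SWAP 0/15960MB (cached 0MB) "
          "EMC_FREQ 1%@2133 GR3D_FREQ 40%@1377")
used, total, gpu = parse_tegrastats_line(sample)
print(f"RAM {used}/{total} MB, GPU {gpu}%")  # RAM 3215/31921 MB, GPU 40%
```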

Thanks.

Many thanks! Where can I find the source code for tegrastats?

It looks like the memory information is available in /proc/meminfo. Details are here: https://github.com/ItsSiddharth/Py_Monitor_JetsonTX2/blob/master/Py_Monitor_JetsonTX2/__init__.py
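Since the Jetson's CPU and GPU share the same physical RAM, the MemTotal/MemAvailable fields in /proc/meminfo are a reasonable proxy for "GPU memory remaining". A minimal sketch of reading them from Python (the helper name is my own; this assumes a Linux kernel recent enough to report MemAvailable):

```python
def meminfo_gb():
    """Return (total, used, available) system RAM in GB from /proc/meminfo."""
    fields = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            fields[key] = int(value.split()[0])  # values are reported in kB
    total = fields["MemTotal"] / 1024**2
    available = fields["MemAvailable"] / 1024**2
    return total, total - available, available

total, used, remaining = meminfo_gb()
print(f"total {total:.2f} GB used {used:.2f} GB remaining {remaining:.2f} GB")
```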
