GPU Power Range In Watts?

Hi All,

What is the power range in watts for the Nano GPU? Thanks.

Tom

Hi, the total module power is roughly 5 ~ 25W, and the exact power distribution varies case by case. For GPU power specifically, the tegrastats command will show the numbers. You can also refer to the power monitor configuration guide for more information on reading power values: https://developer.nvidia.com/embedded/dlc/jetson-tx1-tx2-voltage-current-config

For the “GPU” only, what is the power range? I want to know how hard the GPU is (or isn’t) working.

If you run the tegrastats utility, it will report GPU utilization as GR3D and the GPU power consumption as the POM_5V_GPU rail. For more info, see the tegrastats output documentation here:

https://docs.nvidia.com/jetson/l4t/Tegra%20Linux%20Driver%20Package%20Development%20Guide/AppendixTegraStats.html#wwpID0E0SB0HA

You can also measure the INA voltage/current sensors directly through a script as shown in the document that Trumany linked to. Typical total module power is in the 5-10W range, depending on which nvpmodel profile you have set (10W mode is the default). If you have lots of power-hungry peripherals plugged into the devkit, the carrier can draw more power to support those devices.
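If it helps, here is a minimal shell sketch that pulls the GPU utilization and power out of a tegrastats line. The GR3D_FREQ and POM_5V_GPU field names match the Nano output; other Jetson modules label the rails differently, and the parse_gpu helper name is just for illustration:

```shell
#!/bin/sh
# Extract GPU utilization (%) and power (mW) from one line of tegrastats output.
# Field names (GR3D_FREQ, POM_5V_GPU) match the Jetson Nano; other modules differ.
parse_gpu() {
    line="$1"
    # GR3D_FREQ 0%@921 -> utilization percentage before the '%'
    util=$(printf '%s\n' "$line" | sed -n 's/.*GR3D_FREQ \([0-9]*\)%.*/\1/p')
    # POM_5V_GPU 162/162 -> instantaneous/average milliwatts
    gpu=$(printf '%s\n' "$line" | grep -oE 'POM_5V_GPU [0-9]+/[0-9]+' | awk '{print $2}')
    echo "GPU ${util}% util, ${gpu%%/*} mW now, ${gpu##*/} mW avg"
}

# Live usage:
#   sudo tegrastats --interval 1000 | while read -r l; do parse_gpu "$l"; done
parse_gpu "EMC_FREQ 18%@1600 GR3D_FREQ 0%@921 POM_5V_GPU 162/162 POM_5V_CPU 406/433"
# prints: GPU 0% util, 162 mW now, 162 mW avg
```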

I see no change in GR3D_FREQ:
RAM 2638/3964MB (lfb 3x2MB) SWAP 684/1982MB (cached 10MB) IRAM 0/252kB(lfb 252kB) CPU [2%@1428,2%@1428,2%@1428,5%@1428] EMC_FREQ 18%@1600 GR3D_FREQ 0%@921 NVDEC 396 APE 25 PLL@44.5C CPU@46.5C iwlwifi@41C PMIC@100C GPU@45.5C AO@50.5C thermal@46C POM_5V_IN 3698/3726 POM_5V_GPU 162/162 POM_5V_CPU 406/433

So should the GR3D_FREQ change if the HW decoder is being used and nothing else? The NVDEC does change as expected.

What item represents the GPU power and what is the range?

Thanks.

The video decoder doesn’t use the GPU; it uses its own dedicated hardware (as does the video encoder), so it shouldn’t change GR3D (unless you are also rendering the video to a display, for example).

Try running a CUDA application while keeping an eye on tegrastats, like some of the CUDA samples found under the /usr/local/cuda/samples directory. The samples under the 5_Simulations/ subdirectory, such as nbody and smokeParticles, are among the more GPU-intensive.
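As a sketch for characterizing power during such a run (assuming the --interval/--logfile tegrastats flags available on recent L4T releases, and the default JetPack sample layout), you can log tegrastats to a file while the sample runs and average the GPU rail afterwards. The avg_gpu_mw helper name and /tmp paths are just for illustration:

```shell
#!/bin/sh
# Average the instantaneous POM_5V_GPU readings (mW) in a tegrastats logfile.
avg_gpu_mw() {
    grep -oE 'POM_5V_GPU [0-9]+/[0-9]+' "$1" |
        awk -F'[ /]' '{ sum += $2; n++ } END { if (n) printf "%d\n", sum / n }'
}

# Live usage (build nbody with `make` in its sample directory first):
#   sudo tegrastats --interval 500 --logfile /tmp/ts.log &
#   /usr/local/cuda/samples/5_Simulations/nbody/nbody -benchmark
#   sudo pkill tegrastats
#   avg_gpu_mw /tmp/ts.log
```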

You can also characterize the power while running the deep learning inferencing benchmarks on Nano found here:
https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/