Why does nvpmodel -m0 make GPU usage very low?

I’m running a YOLOv3 TensorRT stand-alone app on my Jetson TX2. Without optimizing the TX2, I got 50~60 ms inference for image classification with 60~99% GPU usage.
The interesting thing is, after I executed

sudo nvpmodel -m 0

I got a much faster inference time of around 17~18 ms, but GPU usage was very low, down to around 5%. Why is that?

Note:
I checked the GPU usage using tegrastats and the gpuGraphTX Python program.
I can’t use jetson_clocks.sh because my board has no fan.
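For reference, tegrastats reports GPU load in its GR3D_FREQ field. A minimal sketch of pulling that percentage out of a tegrastats output line (the exact line format varies between JetPack releases, so the example line and regex are assumptions):

```python
import re

def gr3d_utilization(tegrastats_line):
    """Extract GPU utilization (%) from a tegrastats output line.

    tegrastats prints GPU load as e.g. 'GR3D_FREQ 5%@1300'; the
    '@frequency' suffix is not present on every JetPack version.
    Returns None if the field is missing.
    """
    match = re.search(r"GR3D_FREQ (\d+)%", tegrastats_line)
    return int(match.group(1)) if match else None

# Hypothetical tegrastats line for illustration (format is an assumption):
line = "RAM 2378/7852MB GR3D_FREQ 5%@1300 EMC_FREQ 10%@1600"
print(gr3d_utilization(line))  # 5
```

gpuGraphTX does essentially the same parsing internally to plot the value over time.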

Hi,

After updating the device mode, please remember to lock the clocks to their maximum:

sudo jetson_clocks

Otherwise the clocks will be reset to dynamic scaling after executing nvpmodel.
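You can confirm whether the lock took effect by reading the current GPU clock from sysfs. A small sketch, assuming a TX2-style devfreq node (the exact path differs between boards and L4T releases, so treat it as an assumption and adjust for your system):

```python
from pathlib import Path

# Assumed GPU devfreq node on a TX2; verify the path on your L4T release.
GPU_CUR_FREQ = Path("/sys/devices/gpu.0/devfreq/17000000.gp10b/cur_freq")

def read_hz(path=GPU_CUR_FREQ):
    """Return the current clock in Hz from a devfreq cur_freq file."""
    return int(Path(path).read_text().strip())
```

After jetson_clocks the value should stay pinned at the maximum rate; with dynamic scaling it drops back to a low frequency whenever the GPU goes idle.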

Thanks.

@AastaLLL hi, the problem is if I run that command, I will get :

can't access fan

Do I always have to sudo jetson_clocks after sudo nvpmodel?

Hi,

Yes, you need to maximize the clocks each time after setting the nvpmodel mode.

Here is a related topic about the fan access issue.
The user fixed it by reflashing the device. Would you mind checking it first?
[url]https://devtalk.nvidia.com/default/topic/1026123/jetson-tx2/the-tx2-jetpack-3-0-can-t-access-fan/[/url]

Thanks.

@AastaLLL thank you. But I’m still curious: why does running nvpmodel -m0 speed up inference while making GPU usage lower than usual?

FYI, nvpmodel only changes the available range of clocks and voltages (the DVFS table). It has no bearing on the currently selected clock. If the clocks have idled back, on-demand governing won’t raise them until some time under load has passed. And if the GPU depends on other components to feed it data, I’d expect idled-back clocks elsewhere to reduce GPU usage as well, until those clocks ramp up and have something to feed the GPU. Try it without jetson_clocks, but put the system under load, keep it there for a while, and watch how the load changes over time.
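The effect can be illustrated with a toy model: average utilization is roughly (busy time per frame) / (frame period), and busy time shrinks as the clock rises. If anything else in the pipeline paces the frames, a faster GPU simply idles longer per frame. A back-of-envelope sketch (all numbers invented for illustration, not measured on a TX2):

```python
def avg_gpu_utilization(work_cycles, gpu_clock_hz, frame_period_s):
    """Fraction of each frame period the GPU spends busy.

    work_cycles:    GPU cycles needed per inference (fixed by the model)
    gpu_clock_hz:   current GPU clock
    frame_period_s: wall time between frames (paced by the whole pipeline)
    """
    busy_time = work_cycles / gpu_clock_hz
    return min(1.0, busy_time / frame_period_s)

WORK = 7e6  # hypothetical cycles per inference

# Low, idled-back clock with a ~60 ms frame period: GPU looks busy.
u_low = avg_gpu_utilization(WORK, 140e6, 0.060)
# High, locked clock with an ~18 ms frame period: GPU looks mostly idle.
u_high = avg_gpu_utilization(WORK, 1.3e9, 0.018)

print(round(u_low, 2), round(u_high, 2))
```

The same fixed amount of work reads as high utilization at a low clock and low utilization at a high clock, which matches the 60~99% vs. ~5% readings above.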

Hi,

It doesn’t make the GPU slower. It just resets the GPU clock to the default mode, which is dynamic.

Thanks.