I’m running a stand-alone YOLOv3 TensorRT app on my Jetson TX2. Without optimizing the TX2 I got 50–60 ms inference times for image classification, with 60–99% GPU usage.
The interesting thing is, after I executed
sudo nvpmodel -m 0
I got a much faster inference time of around 17–18 ms, but GPU usage was very low, dropping to around 5%. Why is that?
I checked GPU usage using tegrastats and the gpuGraphTX Python program.
I can’t use jetson_clocks.sh because my board has no fan.
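For reference, here is roughly how one can verify the active power mode and watch GPU load while the app runs (a sketch; the exact tegrastats flags and output format vary by L4T release, and these commands only work on the Jetson itself):

```shell
# Query the currently active nvpmodel power mode
# (mode 0 on the TX2 is MAXN: all cores enabled, highest clocks)
sudo nvpmodel -q

# Switch to MAXN, as in the question
sudo nvpmodel -m 0

# Print utilization/clock stats once per second while the app runs;
# the GR3D_FREQ field shows GPU load and current GPU frequency
sudo tegrastats --interval 1000
```

Note that the GPU-usage percentage tegrastats reports is relative to the current GPU clock, so a low percentage at MAXN clocks can still correspond to more absolute work per second than a high percentage at the default mode's lower clocks.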