How to monitor GPU utilization on Jetson-TK1

Hello,

I am currently working on an image processing application using OpenCV4Tegra on the Jetson-TK1, and I lack information about how heavily my GPU is being used.

nvidia-smi does not seem to be available on Tegra, and the bundled tools, such as tegrastats, are just not precise enough.

Is there any solution that would help me monitor the load of my GPU?

Thanks in advance

You can run tegrastats with sudo. The executable is located in your home directory:

sudo ./tegrastats

The GPU stats are the GR3D ones: the first number is the GPU usage in percent, and the second (after the @) is the current GPU frequency.

So, if tegrastats is really a reliable tool, it means that the most efficient OpenCV4Tegra code never uses the GPU, except for display. That seems like a huge waste for a product like the Jetson-TK1, whose GPU is one of the main arguments for using it.

It is pretty disappointing.

Thank you for your quick reply, Marc_S.

I haven’t used OpenCV4Tegra yet, but I guess it depends on whether the source code explicitly asks for the GPU through CUDA or not. If my memory serves me right, OpenCV4Tegra is a build of OpenCV with special optimizations for some routines, tailored to the K1’s architecture.

So, I would guess that in order to use the GPU for GPGPU work, you would still have to write something specific in the OpenCV code, such as calling gpu::the_opencv_method_you_want_to_use.
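
Something along these lines is what I have in mind (a minimal sketch, assuming OpenCV4Tegra exposes OpenCV 2.4’s gpu module; the image and filter parameters here are just placeholders):

#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // Any test frame will do; random noise stands in for a camera image here.
    cv::Mat src(480, 640, CV_8UC1);
    cv::randu(src, 0, 255);

    // Explicitly move the data to the GPU and call the gpu:: variant.
    cv::gpu::GpuMat d_src, d_dst;
    d_src.upload(src);
    cv::gpu::GaussianBlur(d_src, d_dst, cv::Size(7, 7), 1.5);

    // Bring the result back to host memory for display or further CPU work.
    cv::Mat dst;
    d_dst.download(dst);
    return 0;
}

Without the upload/download and the gpu:: call, everything stays on the CPU, which would explain why tegrastats shows no GR3D activity.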

If anyone can correct me where I’m wrong, that would be great, but I think that’s what is going on here.

IsidoreLechamalin, did you look at the CPU usage during your tests? Maybe the CPU alone is doing the work in your test.

Now, if the code you tested is supposed to rely heavily on the GPU, as we would expect with the K1, maybe the workload is not heavy enough to produce significant GPU usage.

That was my two cents on the matter.

Cheers

You are right that to use the GPU and CUDA, you have to use the ‘gpu’ namespace. :)

Actually, I tested both the CPU and GPU APIs, and I was really disappointed by the performance of the GPU one. So I switched to the CPU API and realized it was far more efficient: I got a much better framerate and almost no latency when processing my video stream with OpenCV4Tegra’s CPU API.

I concluded that NVIDIA put a lot of work into optimizing the CPU API and none into the GPU one. I wanted to check whether the CPU API used the GPU under the hood, which is why I created this thread; I was a bit surprised that tegrastats indicated the GPU was not used at all. :)
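
For reference, my comparison was along these lines (a rough sketch, assuming OpenCV 2.4’s gpu module; the image size and filter are placeholders, and the GPU timing deliberately includes the host/device transfers, since a real pipeline pays for them too):

#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    cv::Mat src(720, 1280, CV_8UC1);
    cv::randu(src, 0, 255);
    cv::Mat dst;

    // CPU path.
    double t0 = (double)cv::getTickCount();
    cv::GaussianBlur(src, dst, cv::Size(7, 7), 1.5);
    double cpuMs = ((double)cv::getTickCount() - t0) / cv::getTickFrequency() * 1000.0;

    // GPU path. The first CUDA call pays context-creation cost, so warm up once.
    cv::gpu::GpuMat d_src, d_dst;
    d_src.upload(src);
    cv::gpu::GaussianBlur(d_src, d_dst, cv::Size(7, 7), 1.5);

    // Timed run, including the host<->device transfers a real pipeline would pay.
    t0 = (double)cv::getTickCount();
    d_src.upload(src);
    cv::gpu::GaussianBlur(d_src, d_dst, cv::Size(7, 7), 1.5);
    d_dst.download(dst);
    double gpuMs = ((double)cv::getTickCount() - t0) / cv::getTickFrequency() * 1000.0;

    std::cout << "CPU: " << cpuMs << " ms, GPU (incl. transfers): " << gpuMs << " ms" << std::endl;
    return 0;
}

With small frames or cheap filters, the transfer overhead alone can make the GPU path lose.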

Thank you anyway for your help!


To monitor GPU utilization, I just read the current load of the GPU from this file:

cat /sys/devices/platform/host1x/gk20a.0/load

If I want to monitor it continuously, I use watch on the same file:

watch -n 1 cat /sys/devices/platform/host1x/gk20a.0/load
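
If you would rather read the value from a program, here is a minimal sketch (assuming the same sysfs path as above; it just prints the raw number once per second):

#include <fstream>
#include <iostream>
#include <unistd.h>

int main()
{
    // Poll the sysfs node once per second and print the raw value.
    while (true)
    {
        std::ifstream f("/sys/devices/platform/host1x/gk20a.0/load");
        int load = 0;
        if (f >> load)
            std::cout << "GPU load: " << load << std::endl;
        else
            std::cerr << "could not read load file" << std::endl;
        sleep(1);
    }
    return 0;
}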

I would be interested to hear if anyone has figured out a similar way to monitor CPU load from the command line.

Thank you very much for this information, @spencer_k.

Can you tell me what the unit of the output is? When I run the cat command I get numbers between 400 and 700, but what do they mean?

Thank you in advance