TX2, GPU utilization rate, tegrastats

On my TX2, I can run “sudo ~/tegrastats” to get GPU utilization in real time, like below:

RAM 4722/7844MB (lfb 1x512kB) CPU [55%@2035,14%@2034,27%@2034,55%@2035,47%@2035,45%@2035] EMC_FREQ 2%@1866 GR3D_FREQ 71%@1300
APE 150 MTS fg 0% bg 0% BCPU@45C MCPU@45C GPU@51C PLL@45C AO@47.5C Tboard@37C Tdiode@46.75C PMIC@100C thermal@46.4C VDD_IN 14025/14416 VDD_CPU 2209/2538 VDD_GPU 6854/6903 VDD_SOC 1371/1370 VDD_WIFI 19/19 VDD_DDR 2702/2702

I think the GR3D_FREQ 71%@1300 field is the GPU utilization rate (71% at 1300 MHz).

However, a real-time reading does not reflect the overall utilization; it may fluctuate anywhere from 10% to 99%. Is there a way to get the average utilization, or some other representative utilization rate, of the GPU?

What I did was collect the printed utilization values for 2 minutes and then average the percentages (GR3D_FREQ ??%@1300), but this doesn't seem like a smart way to do it. Is there a better approach to get the average GPU utilization rate?
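For what it's worth, that manual averaging can be automated with a short pipeline. This is just a sketch: it assumes tegrastats output was captured to a file (the file name and the `avg_gr3d` function name are mine, not anything official), and that each line contains a `GR3D_FREQ NN%` field as in the sample above.

```shell
#!/bin/sh
# avg_gr3d: average the GR3D_FREQ percentages in a saved tegrastats log.
# Capture the log first, e.g.: sudo ~/tegrastats | tee tegrastats.log
# (stop it with Ctrl-C after the window you want to measure).
avg_gr3d() {
    # Pull out each "GR3D_FREQ NN%" match, strip the '%', and average.
    grep -o 'GR3D_FREQ [0-9]*%' "$1" \
      | awk '{gsub(/%/, "", $2); sum += $2; n++}
             END {if (n) printf "%.1f%%\n", sum / n}'
}

# Usage (file name is just an example):
# avg_gr3d tegrastats.log
```

The same idea works for any other tegrastats field; only the grep pattern changes.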

Hi heyworld, if you want to compute a different usage metric, you may be interested in reading the GPU load information directly from Linux.
See this post for more info:


If you are curious about the meaning of the tegrastats print-out, it is documented in the L4T Documentation under the Utilities section.
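In case it helps, here is a minimal sketch of that raw file I/O approach. The sysfs path and the scaling are assumptions on my part (on my reading of community posts, TX2 exposes the load at /sys/devices/gpu.0/load as an integer in tenths of a percent, 0-1000, but this may differ between L4T releases):

```shell
#!/bin/sh
# gpu_load_percent: convert a raw sysfs GPU load value (assumed to be in
# tenths of a percent, e.g. 715) to a readable percentage string (71.5%).
gpu_load_percent() {
    load="$1"
    echo "$((load / 10)).$((load % 10))%"
}

# Usage on the device (sysfs path is an assumption, see above):
# gpu_load_percent "$(cat /sys/devices/gpu.0/load)"
```

Polling that file in a loop and averaging the samples gives you the same kind of mean as averaging the tegrastats print-out, without parsing text.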

I understand that “sudo /home/ubuntu/tegrastats” can print the GPU usage stats at an interval set by the user, but what I am interested in is how to get the average usage rate over a period, such as 5 minutes. Is that doable with sudo ~/tegrastats and some arguments?

You could try the --interval argument to tegrastats, which accepts an interval in milliseconds, like so:

$ sudo ~/tegrastats --interval 5000

This command samples every 5 seconds, for example. I'm not sure if/how it averages over the interval. If the behavior is not what you want, you may be interested in the gtop tool developed by the community, which uses the raw file I/O method I mentioned above:


Hi, I wrote a tool that collects the running tegrastats log and draws a line chart of the CPUs' and GPU's status.

Nice!! Thanks for sharing!

I’m currently working with Jetson and I ran into a similar problem.
The real difficulty with GPU measurement is that the GPU operates at a very high frequency, so any sampled measurement will only be a rough approximation of reality.

I checked @FindHao 's solution and it’s very good!

If I had to do it myself, I would run tegrastats with an interval of 5000-10000 ms and save the log to a .txt file. Then I would write a shell script to cut out the GPU metrics. Finally, do a little math to find the mean GPU usage.

Still, FindHao did a great job!