I noticed that tegrastats only reports a clock speed for the NVENC and NVDEC engines, whereas for other units such as the VIC it also reports a percentage utilisation. See for example this snippet, taken while the encode and decode engines were both active:
NVENC 448 NVDEC 998 NVJPG1 729 VIC_FREQ 11%@115
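To make the difference concrete, here is a small sketch of how I pull these fields out of a tegrastats line. The regex and the field names are my assumptions based on the snippet above, not anything official:

```python
import re

def parse_tegrastats(line):
    """Parse engine fields from one line of tegrastats output.

    Returns a dict mapping engine name to (utilisation_percent_or_None, clock_mhz).
    A field like "VIC_FREQ 11%@115" carries a utilisation and a clock;
    a field like "NVENC 448" carries only a clock.
    """
    stats = {}
    # Matches either "NAME 11%@115" (utilisation@clock) or "NAME 448" (clock only)
    for name, util, clock in re.findall(r"(\w+)\s+(?:(\d+)%@)?(\d+)\b", line):
        stats[name] = (int(util) if util else None, int(clock))
    return stats

line = "NVENC 448 NVDEC 998 NVJPG1 729 VIC_FREQ 11%@115"
print(parse_tegrastats(line))
# VIC_FREQ gets a utilisation; NVENC/NVDEC/NVJPG1 only get a clock
```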
I have found that if I read /sys/kernel/debug/vic/actmon_avg_norm and divide by 10, it matches the tegrastats value for VIC utilisation percentage, implying that actmon_avg_norm is a value between 0 and 1000. This seems to align with the calculation in the kernel's actmon driver:
*avg = (val * 1000) / (actmon->clks_per_sample * actmon->divider);
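If my reading of that formula is right, 1000 corresponds to the engine being busy for the entire sample window, so converting a reading to a percentage is just a divide by 10. A trivial helper capturing that assumption:

```python
def actmon_norm_to_percent(raw):
    """Convert an actmon_avg_norm reading to a utilisation percentage.

    Assumes the driver formula
        *avg = (val * 1000) / (actmon->clks_per_sample * actmon->divider);
    normalises to parts-per-thousand, so 1000 == 100% busy.
    """
    return raw / 10.0

print(actmon_norm_to_percent(110))  # 11.0, matching tegrastats "VIC_FREQ 11%"
```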
However, /sys/kernel/debug/nvdec/actmon_avg_norm frequently goes above 1000. The plot below shows that file sampled every 20 ms: the dots are individual measurements, and the line is a 5-second rolling mean.
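For reference, this is roughly how I collected the data for the plot (a sketch, parameterised over the debugfs path; reading /sys/kernel/debug requires root):

```python
import time
from collections import deque

def sample_actmon(path, interval_s=0.02, window_s=5.0, n_samples=None):
    """Repeatedly read an actmon_avg_norm debugfs file.

    Yields (raw_value, rolling_mean) pairs, where the rolling mean covers
    the last window_s seconds of samples. path is e.g.
    /sys/kernel/debug/nvdec/actmon_avg_norm.
    """
    window = deque(maxlen=max(1, int(window_s / interval_s)))
    taken = 0
    while n_samples is None or taken < n_samples:
        with open(path) as f:
            raw = int(f.read().strip())
        window.append(raw)
        yield raw, sum(window) / len(window)
        taken += 1
        time.sleep(interval_s)
```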
The values for NVENC seem more reasonable.
My high-level questions are:
- Is there a way to reliably monitor NVENC and NVDEC utilisation, in the same way that tegrastats reports it for the VIC?
- Am I correct in saying that actmon_avg_norm should be an integer between 0 and 1000?
- If so, what is the reason for the NVDEC values going above 1000? Can I trust these values at all?