I am trying to build a power profile for each of the Jetson Nano's components independently. Most components' instantaneous power varies with load (e.g. CPU power consumption depends on its % utilization).
When modeling the GPU, however, I found that the actual power consumption (as reported by the tegrastats utility) during inference is constant at its maximum. I run inference with the TensorRT framework. I tried both a full model 120 layers deep and a single-layer model, and the reported power is the same in both cases. The total energy does change with inference time, but one would not expect such different workloads to need the same number of active cores; I may be wrong, though. Is this behavior expected?
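For reference, this is roughly how I read the GPU power rail out of a tegrastats line. The rail name `POM_5V_GPU` and the `instant/average` milliwatt format are what my Nano prints; other Jetson boards may use different rail names:

```python
import re

def parse_rail_mw(line, rail="POM_5V_GPU"):
    """Extract (instantaneous, average) power in mW for a given
    rail from one line of tegrastats output, or None if absent."""
    m = re.search(rf"{re.escape(rail)} (\d+)/(\d+)", line)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

# Example line captured on my Nano (values illustrative):
sample = ("RAM 1980/3964MB CPU [21%@1428,15%@1428,10%@1428,8%@1428] "
          "GR3D_FREQ 76% GPU@39.5C "
          "POM_5V_IN 3942/3942 POM_5V_GPU 923/923 POM_5V_CPU 1228/1228")

print(parse_rail_mw(sample))  # (923, 923)
```

I log these values at the default 1 s interval while the TensorRT engine runs, and the GPU rail stays pinned at the same value regardless of model depth.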
Thank you in advance