Power consumption for GPU after offloading to DLA

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): AGX Orin
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version: 8.5.2
I’m trying to compare power consumption and GPU/CPU load when running models on the DLA. According to the documentation, the DLA is expected to reduce power consumption by a factor of roughly 3~5 compared to the GPU.
However, in my case the power consumption (VDD_GPU_SOC-AVG_POWER) decreases by only around 20 mW (3537 mW → 3522 mW) and the GPU load (IGPU0-CURR_LOAD) increases for the FaceDetect model, even though no layer falls back to the GPU and all layers are executed on the DLA. Is there any reason for this? I have repeated the tests several times and get very similar results. Additionally, I found that CPU power consumption (VDD_CPU_CV-AVG_POWER) also increases when the DLA is enabled, and I can’t see how the CPU is affected by offloading work from the GPU.
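
For reference, this is roughly how I am collecting the numbers: a minimal Python sketch that averages the two rails from tegrastats. The `NAME <instant>mW/<avg>mW` output format and the one-minute window are assumptions based on what my JetPack 5.1 setup prints, so the regex may need adjusting on other releases.

```python
#!/usr/bin/env python3
"""Average selected power rails from tegrastats over a fixed window (sketch).

Assumes tegrastats prints each rail as e.g. "VDD_GPU_SOC 3537mW/3522mW"
(instantaneous/average), as on my JetPack 5.1 setup.
"""
import re
import subprocess

RAILS = ["VDD_GPU_SOC", "VDD_CPU_CV"]   # rails compared in this thread
SAMPLES = 60                            # ~60 s at a 1000 ms interval

def main():
    # tegrastats needs root; --interval is in milliseconds
    proc = subprocess.Popen(
        ["sudo", "tegrastats", "--interval", "1000"],
        stdout=subprocess.PIPE, text=True)
    sums = {r: 0.0 for r in RAILS}
    count = 0
    try:
        for line in proc.stdout:
            for rail in RAILS:
                m = re.search(rf"{rail} (\d+)mW/(\d+)mW", line)
                if m:
                    sums[rail] += float(m.group(1))  # instantaneous draw in mW
            count += 1
            if count >= SAMPLES:
                break
    finally:
        proc.terminate()
    for rail in RAILS:
        if count:
            print(f"{rail}: mean {sums[rail] / count:.1f} mW over {count} samples")

if __name__ == "__main__":
    main()
```

I run this once with the GPU engine and once with the DLA engine and compare the printed means; the 20 mW difference above comes from that comparison.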

support please…

Hi,

Which nvpmodel do you use? Have you applied jetson_clocks?
If your model can run fully on the DLA, the GPU clock can be set to a lower frequency without affecting performance.
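
To double-check both, a minimal Python sketch like the one below can query the current state. It assumes nvpmodel and jetson_clocks are on PATH (the default on JetPack) and that you can run them with sudo.

```python
#!/usr/bin/env python3
"""Quick check of the current power mode and clock state (sketch)."""
import subprocess

# Current nvpmodel power mode (e.g. mode 0 / MAXN)
print(subprocess.run(["sudo", "nvpmodel", "-q"],
                     capture_output=True, text=True).stdout)

# Current clock configuration; if the clocks sit at their maximum even
# while idle, jetson_clocks has most likely been applied
print(subprocess.run(["sudo", "jetson_clocks", "--show"],
                     capture_output=True, text=True).stdout)
```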

Thanks.

I’m setting the device to mode 0 (MAXN performance). Do you mean that if I set it to a lower-performance mode I will be able to see the change in power consumption?
Additionally, does changing the power mode affect anything else on the device? For example, is there any possibility it removes files or installed libraries?

Hi,

Updating the nvpmodel only changes the CPU/GPU clock configuration; it does not remove any files or installed libraries.
Since you don’t use the GPU for inference (the DLA instead), the change should not affect inference performance.

You can also stay on MAXN but let the clocks be adjusted dynamically.
(Dynamic clocking is the default setting, so simply don’t apply jetson_clocks.)
Ideally, you should see a lower GPU clock (and therefore lower power) when inferring with the DLA.
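
To verify this, a small poller like the sketch below can be used. The assumption here is that the iGPU is exposed as a devfreq device under /sys/class/devfreq (as on JetPack 5.x); the script simply prints every devfreq node so you can pick out the GPU one.

```python
#!/usr/bin/env python3
"""Poll devfreq frequencies to see whether the GPU clock scales down (sketch)."""
import glob
import time

def snapshot():
    freqs = {}
    for path in glob.glob("/sys/class/devfreq/*/cur_freq"):
        device = path.split("/")[-2]
        with open(path) as f:
            freqs[device] = int(f.read().strip())  # frequency in Hz
    return freqs

# Print one line per second; run the DLA pipeline in another terminal and
# compare the GPU device's frequency against a GPU-only run.
for _ in range(30):
    for device, hz in snapshot().items():
        print(f"{device}: {hz / 1e6:.0f} MHz", end="  ")
    print()
    time.sleep(1)
```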

Thanks.

Thank you very much for clarification.

I’m using the GPU to run another model, but while it is running the GPU load does not exceed 30%, which means it is far from fully utilized. Do you think reducing the nvpmodel could affect its performance?