Hi, I'm finding it hard to believe that the DLA is more energy efficient.
I used trtexec to measure a model's inference time and energy consumption.
I used the ResNet-50 Caffe prototxt, but I removed the last layer (Softmax, which the DLA does not support) and renamed the last fully connected layer's output to prob, to avoid GPU fallback.
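The edit looks roughly like this in the deploy prototxt (layer and blob names as in the stock ResNet-50 file; the original Softmax layer is simply deleted):

```
# Last fully connected layer; its output blob is renamed to "prob"
# so it can be marked as the network output in place of the Softmax.
layer {
  name: "fc1000"
  type: "InnerProduct"
  bottom: "pool5"
  top: "prob"
  inner_product_param { num_output: 1000 }
}
```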
The command I used was something like this:
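(File name, batch size, and run counts below are placeholders; note there is no --allowGPUFallback, so the build fails if any layer would leave the DLA:)

```
trtexec --deploy=resnet50.prototxt --output=prob --useDLACore=0 --fp16 \
        --batch=8 --iterations=1000 --avgRuns=100
# GPU baseline: same command without --useDLACore=0
```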
I measured power consumption with tegrastats and calculated (img/sec) / (total power in W), i.e., images per joule.
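For anyone reproducing this, the measurement loop was roughly the following (interval and file name are arbitrary; rail names vary by board and L4T version):

```
sudo tegrastats --interval 200 --logfile run.log &    # sample power rails every 200 ms
trtexec --deploy=resnet50.prototxt --output=prob --useDLACore=0 --fp16 --batch=8
sudo tegrastats --stop                                # stop the background logger
# Sum the per-rail averages in run.log (GPU/CPU/SOC/CV/VDDRQ/SYS5V on Xavier, in mW)
# and divide trtexec's img/sec by that total in W.
```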
But the GPU with FP16 was always more efficient than the DLA with FP16. (I tried MAXN mode and 15W mode, both after running sudo jetson_clocks.)
Of course the DLA itself drew little power, but the VDDRQ, SOC, and CPU rails also drew power, and the DLA's inference was slower, so the total energy per image came out worse.
Could you give me an example where I can verify that the DLA is more energy efficient?
Thanks.
=====================================
I'm really sorry.
I found that with other networks, such as GoogLeNet, using the DLA can be more energy efficient.
=====================================
In my experience, the main benefit of the DLA comes when you can run it in INT8 mode. Assuming your model still performs well at that precision, the DLA can run reasonably fast and efficiently.
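For a quick throughput check (with --int8 and no calibration cache, trtexec substitutes dummy scaling factors, so this measures performance only, not accuracy):

```
trtexec --deploy=resnet50.prototxt --output=prob --useDLACore=0 --int8 --batch=8
```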
If you don't care about the CPU at all, try taking all but one CPU core offline and turning down the memory/GPU clocks. You can actually define your own power profiles in /etc/nvpmodel.conf and apply those.
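A sketch of the manual route (core numbering and the mode ID depend on your board and on what you define in nvpmodel.conf):

```
# Take cores 1..7 offline, leaving only cpu0 (AGX Xavier has 8 cores)
for c in /sys/devices/system/cpu/cpu[1-7]/online; do
    echo 0 | sudo tee "$c"
done

# Or apply a custom profile you added to /etc/nvpmodel.conf
sudo nvpmodel -m 8   # 8 = hypothetical ID of your custom profile
sudo nvpmodel -q     # confirm the active mode
```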