Pruning Without Improvement

Hello,

I have trained a model with and without pruning, using target sparsities of 0.6 and 0.9.
After that, I used trtexec to run inference on a Xavier with JetPack 4.5.1. Is it normal that the energy and time values are almost the same with and without pruning?

I include two graphs, one for time and one for energy.

[Graph 1: Host Latency, Mean (ms)]
[Graph 2: Total Energy]

Thanks, Paula

Hi,
This looks like a Jetson issue. We recommend raising it on the respective platform forum via the link below

Thanks!

Moved!

Thank you!

Hi,

This depends on the pruning algorithm.

For example, suppose you prune half of the values in a convolution kernel to zero.
Since the GPU is a SIMD architecture, the convolution is still computed in parallel over the full dense kernel.
The runtime won't be much different even if some of the values are zero.
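
To make this concrete, here is a minimal sketch of unstructured magnitude pruning (assuming PyTorch and its torch.nn.utils.prune module, which are not part of this thread): the zeroed weights stay inside a dense tensor, so a dense convolution still does the same amount of work.

```python
# Minimal sketch, assuming PyTorch: unstructured pruning zeroes individual
# weights but keeps the tensor dense, so the GPU still performs the same
# number of multiply-accumulates.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Zero out the 60% smallest-magnitude weights (target sparsity 0.6).
prune.l1_unstructured(conv, name="weight", amount=0.6)

x = torch.randn(1, 64, 56, 56)
y = conv(x)  # Still a dense convolution; zeros are multiplied like any other value.

sparsity = (conv.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.2f}, output shape: {tuple(y.shape)}")
```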

We do have a toolkit that applies GPU-friendly pruning.
We recommend giving it a try:
https://developer.nvidia.com/transfer-learning-toolkit
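
For contrast, here is a minimal sketch of structured channel pruning (plain PyTorch for illustration, not the Transfer Learning Toolkit API), which is what makes pruning GPU-friendly: removing whole output channels actually shrinks the convolution, so there is less work to execute.

```python
# Minimal sketch, assuming PyTorch (not the TLT API): structured pruning drops
# whole output channels, so the resulting convolution is genuinely smaller.
import torch
import torch.nn as nn

conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)

# Rank output channels by the L1 norm of their filters and keep the largest 40%
# (i.e. prune 60% of the channels).
l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))
keep = torch.argsort(l1, descending=True)[: int(64 * 0.4)]

pruned = nn.Conv2d(64, len(keep), kernel_size=3, padding=1)
pruned.weight.data = conv.weight.data[keep].clone()
pruned.bias.data = conv.bias.data[keep].clone()

x = torch.randn(1, 64, 56, 56)
print(conv(x).shape, pruned(x).shape)  # 64 output channels vs. 25 after pruning
```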

Thanks.

Thank you!
