I’m running inference of a model on a Jetson Xavier NX. Is there any way to use all the CUDA cores during inference to make it faster?
It’s recommended to check where the bottleneck of your app is first; you can watch GPU utilization with tegrastats while the pipeline runs.
It’s common for a pipeline to be blocked by slow input pre-processing on the CPU rather than by the GPU itself.
To improve this, you can use a hardware-accelerated multimedia API for decoding and pre-processing instead.
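Before switching APIs, it helps to confirm where the time actually goes. Below is a minimal sketch for timing the pre-processing and inference stages separately; the `preprocess` and `infer` callables are placeholders for your own pipeline stages, not part of any specific SDK.

```python
import time

def profile_pipeline(frames, preprocess, infer, warmup=5):
    """Time pre-processing vs. inference separately to find the bottleneck.

    `preprocess` and `infer` are placeholders for your own stages.
    Returns average seconds per frame for each stage, skipping the
    first `warmup` iterations (CUDA context init, allocator warm-up).
    """
    pre_total = 0.0
    inf_total = 0.0
    for i, frame in enumerate(frames):
        t0 = time.perf_counter()
        batch = preprocess(frame)   # e.g. decode / resize / normalize
        t1 = time.perf_counter()
        infer(batch)                # e.g. TensorRT execution
        t2 = time.perf_counter()
        if i >= warmup:
            pre_total += t1 - t0
            inf_total += t2 - t1
    n = max(len(frames) - warmup, 1)
    return pre_total / n, inf_total / n
```

If the pre-processing average dominates, faster CUDA kernels or more GPU cores for the model won’t help much until the input stage is accelerated.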
For example, see the NVIDIA DeepStream SDK page on NVIDIA Developer.