The TensorRT inference API consumes more CPU resources (Jetson Xavier NX)

There has been no update from you for a while, so we assume this is no longer an issue.
Hence, we are closing this topic. If you need further support, please open a new one.
Thanks

Hi,

Could you share the ONNX file that can be deployed with trtexec?
We want to reproduce this locally to check the CPU usage further.

Thanks.
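For reference, a reproduction along these lines would typically run the shared model through trtexec and watch CPU load while it executes. This is a minimal sketch, not the exact commands from this topic: `model.onnx` is a placeholder for the file requested above, and the trtexec path assumes the default JetPack install location on Jetson (`/usr/src/tensorrt/bin`).

```shell
# Placeholder name for the ONNX file requested in this topic.
MODEL=model.onnx

# trtexec ships with TensorRT; on Jetson/JetPack it is usually found at
# /usr/src/tensorrt/bin (assumption -- adjust to your install).
TRTEXEC=/usr/src/tensorrt/bin/trtexec

# Command that builds an engine from the ONNX file and runs timed
# inference iterations, which is when the CPU usage would be observed.
CMD="${TRTEXEC} --onnx=${MODEL} --iterations=100"

# Only run if trtexec is actually present on this machine; while it runs,
# monitor CPU load in a second terminal with tegrastats or top.
if [ -x "${TRTEXEC}" ]; then
  ${CMD}
fi

echo "${CMD}"
```

While the benchmark runs, `sudo tegrastats` on the Jetson reports per-core CPU utilization alongside GPU load, which is usually the easiest way to compare CPU cost across runs.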