TensorRT on Jetson Nano

I’m using a Jetson Nano to run DNN inference. I have converted the model to ONNX and built it with the command ‘$ /usr/src/tensorrt/bin/trtexec --onnx=[model]’. Now my question is: how do I pass images (in a batch) to this runtime to run actual inference? Thanks in advance.

Hi @MihirJog, you can refer to the TensorRT code examples found on your Jetson here (a minimal batched-inference sketch in Python follows the list below):

  • C++ /usr/src/tensorrt/samples
  • Python /usr/src/tensorrt/samples/python
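As a rough starting point (not taken from those samples), here is a minimal Python sketch of batched inference. It assumes the TensorRT 7/8 Python bindings that ship with JetPack, pycuda installed, and an ONNX model with a fixed batch dimension; the file name model.onnx, the single input/output bindings, and the NCHW float32 layout are all assumptions you would adapt to your model:

```python
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401 -- creates and activates a CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    """Parse an ONNX file and build a TensorRT engine (TRT 7/8 API)."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB, modest enough for the Nano
    return builder.build_engine(network, config)

engine = build_engine("model.onnx")  # placeholder file name
context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding (inputs and outputs).
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(trt.volume(shape), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Fill binding 0 (assumed to be the input) with a batch of preprocessed
# images in NCHW float32 layout; random data stands in for real images here.
batch = np.random.rand(*tuple(engine.get_binding_shape(0))).astype(np.float32)
np.copyto(host_bufs[0], batch.ravel())

cuda.memcpy_htod(dev_bufs[0], host_bufs[0])  # host -> device
context.execute_v2(bindings)                 # synchronous batched inference
cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])  # device -> host (assumed output)
print(host_bufs[1].reshape(tuple(engine.get_binding_shape(1))))
```

If your ONNX model has a dynamic batch dimension (-1) instead of a fixed one, you would additionally need an optimization profile at build time and a call to context.set_binding_shape() before execution.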

You can also find the TensorRT documentation here: https://docs.nvidia.com/deeplearning/tensorrt/

There are also various projects on GitHub that use TensorRT, which you may be able to adapt if you’re using a common type of network. There are also DeepStream and Triton Inference Server, which handle batch processing with TensorRT if you prefer something higher-level.
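Since you have already run trtexec, one small addition: trtexec can serialize the engine it builds with --saveEngine, and your Python code can then deserialize that file directly instead of re-parsing the ONNX model at every startup (engine builds can take minutes on the Nano). A short sketch, where model.plan is a placeholder name:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize an engine previously written by, e.g.:
#   /usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.plan
runtime = trt.Runtime(TRT_LOGGER)
with open("model.plan", "rb") as f:  # "model.plan" is a placeholder
    engine = runtime.deserialize_cuda_engine(f.read())

# From here, create an execution context and run the same buffer and
# execute_v2 steps shown in the sketch above.
context = engine.create_execution_context()
```

Note that serialized engines are specific to the GPU and TensorRT version they were built with, so a plan file built on the Nano will not load on other machines.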
