I’m using a Jetson Nano for DNN inference. I have converted my model to ONNX and run it with the command ‘$ /usr/src/tensorrt/bin/trtexec --onnx=[model]’. Now my question is: how do I pass images (in a batch) to this runtime to perform actual inference? Thanks in advance.
Hi @MihirJog, you can refer to the TensorRT code examples found on your Jetson here (a minimal batched-inference sketch follows the list below):
- C++
/usr/src/tensorrt/samples
- Python
/usr/src/tensorrt/samples/python
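To give a concrete idea of what the Python samples cover, here is a minimal sketch of batched inference with the TensorRT Python API and PyCUDA. It assumes you saved a serialized engine with trtexec’s --saveEngine flag, that the engine has a single input and single output binding with fixed shapes (batch dimension included), and that both tensors are float32. The file name model.engine and the shapes are placeholders; adjust them to your model.
```python
# Minimal sketch, not a drop-in solution. Assumes the engine was built with
# something like: trtexec --onnx=model.onnx --saveEngine=model.engine
# and has one float32 input and one float32 output with fixed shapes.
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context on import

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine that trtexec saved
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# The binding shapes come from the engine; preprocessing must match them
input_shape = tuple(engine.get_binding_shape(0))   # e.g. (8, 3, 224, 224)
output_shape = tuple(engine.get_binding_shape(1))  # e.g. (8, 1000)

# Preprocess your batch of images into one contiguous NCHW float32 array
batch = np.random.rand(*input_shape).astype(np.float32)  # placeholder batch
output = np.empty(output_shape, dtype=np.float32)

# Allocate device buffers and copy the whole batch to the GPU
d_input = cuda.mem_alloc(batch.nbytes)
d_output = cuda.mem_alloc(output.nbytes)
cuda.memcpy_htod(d_input, batch)

# Run inference; the bindings list is ordered by binding index
context.execute_v2(bindings=[int(d_input), int(d_output)])

# Copy the results back to the host
cuda.memcpy_dtoh(output, d_output)
print(output.shape)
```
If you instead build the engine with a dynamic batch dimension (trtexec --minShapes/--optShapes/--maxShapes), you would also call context.set_binding_shape(0, batch.shape) before executing. The samples under /usr/src/tensorrt/samples/python show this same flow in more detail, including proper buffer management.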
You can also find the TensorRT documentation here:
- Developer Guide https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html
- API Reference https://docs.nvidia.com/deeplearning/tensorrt/api/index.html
There are also various projects on GitHub that use TensorRT, which you may be able to adapt if you’re working with a common type of network. DeepStream and Triton Inference Server also handle batch processing with TensorRT, if you prefer to use something higher-level.