TensorRT Jetson Nano ONNX Inference

Hi everyone, good morning.
I tried running YOLO on my Jetson Nano with the OpenCV DNN module and CUDA, and the results were not good because the inference performance was poor.

Someone suggested that I check out TensorRT instead, and what a difference!

I have read about TensorRT, I have followed the GitHub instructions and had success, but now I am looking for information on HOW to use the “engine” that results from the ONNX-to-TensorRT conversion.
I mean, the demos for testing YOLO with TensorRT work well, but they call other scripts, use utilities, etc.

I would like to know how to LOAD or USE this engine in a really basic detection script. I have heard that I need to create a CONTEXT and so on.

Do you have any ideas?
Thanks in advance.

#TensorRT #Inference #jetson-embedded-systems:jetson-nano

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi @rodolfo.betanzos,
You can save the engine using trtexec's --saveEngine argument, and later load it using the --loadEngine argument.
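If you prefer doing the same thing from your own script instead of trtexec, here is a rough sketch of building and serializing an engine from an ONNX file with the TensorRT Python API. It assumes the TensorRT 7/8 Python bindings that ship with JetPack; the file paths and workspace size are placeholders, and the exact builder calls vary a bit between TensorRT versions.

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_and_save(onnx_path, engine_path):
    builder = trt.Builder(TRT_LOGGER)
    # ONNX models require an explicit-batch network definition
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse the ONNX file")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 28  # 256 MiB; adjust for the Nano's memory
    # config.set_flag(trt.BuilderFlag.FP16)  # optional, if you want FP16 on the Nano

    # build_engine is the TRT 7/8 call; newer releases use build_serialized_network
    engine = builder.build_engine(network, config)
    with open(engine_path, "wb") as f:
        f.write(engine.serialize())

build_and_save("yolo.onnx", "yolo.engine")  # placeholder file names
```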
Please refer to the links below to understand it better.
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#c_topics
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#python_topics
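Once you have the serialized engine file, loading it and running inference from a basic Python script could look roughly like the sketch below. This assumes the TensorRT Python bindings and pycuda are installed on the Nano and that the engine has static shapes; the engine file name, input preprocessing, and output handling are placeholders you would replace with your own.

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context for this process
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by the ONNX-to-TensorRT step
with open("yolo.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# The execution context holds the per-inference state
context = engine.create_execution_context()

# Allocate page-locked host buffers and device buffers for every binding
inputs, outputs, bindings = [], [], []
stream = cuda.Stream()
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    device_mem = cuda.mem_alloc(host_mem.nbytes)
    bindings.append(int(device_mem))
    if engine.binding_is_input(binding):
        inputs.append((host_mem, device_mem))
    else:
        outputs.append((host_mem, device_mem))

def infer(image):
    # image: a numpy array already preprocessed to the input binding's shape/dtype
    np.copyto(inputs[0][0], image.ravel())
    cuda.memcpy_htod_async(inputs[0][1], inputs[0][0], stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    for host_mem, device_mem in outputs:
        cuda.memcpy_dtoh_async(host_mem, device_mem, stream)
    stream.synchronize()
    return [host_mem.copy() for host_mem, _ in outputs]
```

Note that the raw outputs still need the YOLO-specific decoding (boxes, confidences, NMS); that post-processing is what the extra utility scripts in the demos take care of.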
Thanks!