Hi everyone, good morning,
I did the exercise of running YOLO on my Jetson Nano with the OpenCV DNN module and CUDA, and the results were not good because the inference performance was poor.
Someone suggested that I check out TensorRT, and what a difference!
I have read about TensorRT, followed the GitHub instructions, and had success, but now I am looking for information on HOW to use the ENGINE file that results from the ONNX-to-TensorRT conversion.
I mean, the demos for testing YOLO with TensorRT work well, but they call other scripts, use utilities, etc.
I would like to know how to LOAD or USE this ENGINE in a really basic detection script. I have read that I need to create a CONTEXT and so on.
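So far, my understanding of the flow is roughly the sketch below. This is untested on my part: the file name "yolo.engine" is just a placeholder for my converted engine, it assumes the TensorRT 7.x Python API that ships with JetPack plus pycuda, and the input here is random data standing in for a preprocessed image. Is this the right structure?

```python
# Minimal TensorRT inference sketch (assumptions: TensorRT 7.x Python API,
# pycuda installed, and an engine file named "yolo.engine" -- a placeholder).
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context for this process
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# 1. Deserialize the engine produced by the ONNX -> TensorRT conversion.
with open("yolo.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# 2. Create an execution context and allocate host/device buffers,
#    one pair per binding (input and output tensors of the engine).
context = engine.create_execution_context()
stream = cuda.Stream()
host_bufs, dev_bufs, bindings = [], [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding))
    dtype = trt.nptype(engine.get_binding_dtype(binding))
    host_mem = cuda.pagelocked_empty(size, dtype)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    host_bufs.append(host_mem)
    dev_bufs.append(dev_mem)
    bindings.append(int(dev_mem))

# 3. Copy a (preprocessed) image into the input buffer and run inference.
#    Real preprocessing depends on the YOLO variant: resize, normalize,
#    CHW channel order. Random data used here only to show the mechanics.
image = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
np.copyto(host_bufs[0], image)
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for h, d in zip(host_bufs[1:], dev_bufs[1:]):
    cuda.memcpy_dtoh_async(h, d, stream)
stream.synchronize()

# host_bufs[1:] now hold the raw YOLO output tensors; decoding them into
# boxes/scores is model-specific (that is what the demo utilities do).
```

Is this basically correct, and is the post-processing step the only thing I would still need to take from the demo scripts?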
Does anyone have any ideas?
Thanks in advance.
#TensorRT #Inference #jetson-embedded-systems:jetson-nano
Environment
TensorRT Version:
Nvidia Driver Version:
CUDA Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (GitHub repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered