YOLOv5 inference with TensorRT and C++

Hi,
I have converted the YOLOv5 model to a TensorRT engine and I run inference with Python.
However, the model still runs somewhat slowly under Python.
Also, I'm using a Jetson TX1, which does not have high performance.
How can I run inference in C++ on the YOLOv5 model converted to TensorRT to improve speed?
yolov5 repo: link
inference command: python3 detect.py --weights yolov5m.engine --source 1
CUDA: 10.2
TensorRT: 8.2
cuDNN: 8.2
Jetson TX1

Hi,

Is there any plugin layer used in your model?
If not, you can run the engine with trtexec first to get a rough performance estimate:

$ /usr/src/tensorrt/bin/trtexec --loadEngine=<file>
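As a sketch, trtexec can also report averaged timings over many runs; the engine filename below and the flag values are examples, not from the original thread:

```shell
# Benchmark the serialized engine on-device.
# --iterations controls how many timed inference runs are performed;
# --avgRuns sets how many runs are averaged per reported measurement.
/usr/src/tensorrt/bin/trtexec --loadEngine=yolov5m.engine --iterations=100 --avgRuns=10
```

The reported GPU compute time is a useful upper bound on what a C++ application can achieve, since trtexec itself is a C++ program with minimal overhead.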

The source code of trtexec can be found in the repository below:

Thanks.
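For the C++ side of the original question, the basic deserialize-and-execute flow can be sketched as follows. This is a minimal, hedged example, not the official YOLOv5 deployment code: the engine filename (`yolov5m.engine`) matches the command in the question, FP32 input/output bindings are assumed, and image preprocessing plus YOLO postprocessing (box decoding and NMS) are omitted. It targets the TensorRT 8.x C++ API.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

// TensorRT requires an ILogger implementation.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

// Number of elements in a tensor with the given dimensions.
static size_t volume(const nvinfer1::Dims& d) {
    size_t v = 1;
    for (int i = 0; i < d.nbDims; ++i) v *= d.d[i];
    return v;
}

int main() {
    Logger logger;

    // 1. Read the serialized engine from disk (filename is an assumption).
    std::ifstream file("yolov5m.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    // 2. Deserialize the engine and create an execution context.
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size());
    nvinfer1::IExecutionContext* context = engine->createExecutionContext();

    // 3. Allocate one device buffer per binding (inputs and outputs),
    //    assuming FP32 tensors.
    int nb = engine->getNbBindings();
    std::vector<void*> buffers(nb);
    for (int i = 0; i < nb; ++i) {
        size_t bytes = volume(engine->getBindingDimensions(i)) * sizeof(float);
        cudaMalloc(&buffers[i], bytes);
    }

    // 4. Copy the preprocessed image into the input buffer (omitted),
    //    run inference, then copy detections back to the host (omitted).
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV2(buffers.data(), stream, nullptr);
    cudaStreamSynchronize(stream);

    // 5. Clean up. In TensorRT 8.x, destroy() is deprecated in favor of delete.
    for (void* b : buffers) cudaFree(b);
    cudaStreamDestroy(stream);
    delete context;
    delete engine;
    delete runtime;
    return 0;
}
```

Compile it against the TensorRT and CUDA libraries, e.g. `g++ main.cpp -lnvinfer -lcudart`, with the include/library paths from your JetPack installation.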
