YOLOv3 with TensorRT 5

I am sorry if this is not the correct place to ask this question, but I have looked everywhere.

Already installed:
CUDA 10
TensorRT 5

I have been working with YOLO for a while now, and I am trying to run YOLOv3 with TensorRT 5 using C++ on a single image to see the detections. If you have sample code for that, it would help a lot.

Thanks.

Hello,

We have a few references:

Please reference the (YOLOv2) post Accelerating Large-Scale Object Detection with TensorRT | NVIDIA Technical Blog, and to make it work for YOLOv3, implement the neural-net layers that are not supported in TensorRT 5 as custom plug-in layers.

We also have a yolov3_onnx example at: https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#yolov3_onnx

Regards

  1. So SIDNET is not available for training and inference as open source?
  2. YOLOv3 to ONNX to TRT… Can you provide a C++ sample to run the TRT files on images?
     I am not interested in Python.

The process_yolo_output method in data_processing.py takes 1 second per frame on video on my 1050 Ti, which makes it even slower than the original author's implementation of YOLOv3…
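For what it's worth, most of that time usually goes into per-element Python loops in the postprocessing. A hedged sketch of vectorizing the sigmoid/exponential transforms with NumPy — the array layout and function name here are assumptions for illustration, not the sample's exact `process_yolo_output` signature:

```python
import numpy as np

def sigmoid(x):
    # elementwise sigmoid over the whole array at once, no Python loop
    return 1.0 / (1.0 + np.exp(-x))

def decode_output(raw, num_classes=80):
    # raw: one YOLO head reshaped to (grid_h, grid_w, num_anchors, 5 + num_classes);
    # this layout is an assumption -- adapt it to data_processing.py's actual shapes
    box_xy = sigmoid(raw[..., 0:2])       # center offsets in [0, 1)
    box_wh = np.exp(raw[..., 2:4])        # width/height scales (before anchor multiply)
    objectness = sigmoid(raw[..., 4:5])   # object confidence
    class_probs = sigmoid(raw[..., 5:])   # per-class confidences
    scores = objectness * class_probs     # combined scores, fully vectorized
    return box_xy, box_wh, scores

# example: a random 13x13 head with 3 anchors and 80 classes
box_xy, box_wh, scores = decode_output(np.random.randn(13, 13, 3, 85))
```

Replacing the inner Python loops with whole-array operations like these is typically what brings per-frame postprocessing from seconds down to milliseconds.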

Thanks.


Hi,
Check this

I succeeded in running this on my Jetson Xavier board directly.
It gives me 20 fps for an input image at 640 × 480 resolution.

Interesting — how do you convert Darknet weights into a Caffe model?

I don’t. I was using the Caffe model which is in this repo. That Caffe model was converted from the original YOLOv3 model by the repo’s owner.

Hi,

Has anyone implemented C++ inference code to run a Darknet-to-ONNX converted model?
I have seen the https://github.com/lewes6369/TensorRT-Yolov3 repository, but it is implemented for a Caffe model.

Thanks

Hi there,

Here is a repository that has YOLO and the postprocessing implemented in CUDA C.

It should be possible to use it with ONNX, but I could not compile it yet.

I am stuck on a problem: I was trying to run inference on my customized YOLO model with TensorRT, and I got stuck at the step below (converting the YOLO model to ONNX). I have my own yolov3.weights, whose size I need to optimize using TensorRT.

The file yolov3_to_onnx.py below is located in /usr/src/tensorrt/samples/python/

python2 yolov3_to_onnx.py
Error:

Traceback (most recent call last):
File "yolov3_to_onnx.py", line 812, in <module>
main()
File "yolov3_to_onnx.py", line 793, in main
'c84e5b99d0e52cd466ae710cadf6d84c')
File "yolov3_to_onnx.py", line 750, in download_file
(local_path, checksum_reference))
ValueError: The MD5 checksum of local file yolov3.weights differs from c84e5b99d0e52cd466ae710cadf6d84c, please manually remove the file and try again

My question is:
How do I find the MD5 checksum of my local yolov3.weights file? The script only has the checksum of the pretrained yolov3.weights. By local file, I mean the weights for my custom YOLOv3 model.

Please help me with this. I would be highly grateful.
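For the first part of the question, the MD5 of any local file can be computed with Python's standard hashlib module (the file path in the usage comment is a placeholder for your custom weights file):

```python
import hashlib

def md5_of_file(path, chunk_size=8192):
    # stream the file in chunks so large .weights files don't need to fit in RAM
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# usage (path is a placeholder for your custom weights file):
#   print(md5_of_file("yolov3.weights"))
```

Note that knowing the checksum of your custom weights does not by itself satisfy the script's check, since the script compares against the hard-coded checksum of the pretrained weights.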

You could change this

cfg_file_path = download_file(
    'yolov3.cfg',
    'https://raw.githubusercontent.com/pjreddie/darknet/f86901f6177dfc6116360a13cc06ab680e0c86b0/cfg/yolov3.cfg',
    'b969a43a848bbf26901643b833cfb96c')

into

cfg_file_path = "/path/to/your/cfg"

You could skip the download process and just read the config file locally.
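Putting that together, a minimal sketch of the edit inside yolov3_to_onnx.py — the variable names are assumed to match the sample's main(), and the paths are placeholders for your own files:

```python
# Instead of the download_file(...) calls with hard-coded MD5 checksums of the
# pretrained model, point the sample at your own files directly:
cfg_file_path = "/path/to/your/yolov3.cfg"          # placeholder path
weights_file_path = "/path/to/your/yolov3.weights"  # placeholder path

# The rest of main() can then parse these files unchanged: the checksum test
# only exists to validate the downloaded pretrained files, so bypassing
# download_file removes it for a custom model.
```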

There is a TensorRT YOLOv3 plugin here: https://github.com/AlexeyAB/deepstream-plugins/tree/master/sources/gst-yoloplugin/yoloplugin_lib.

Here is an encapsulation of the official TensorRT YOLO implementation: https://github.com/enazoe/yolo-tensorrt

Check this: https://github.com/enazoe/yolo-tensorrt

You can use the MobileNet-YOLO project for the darknet2caffe conversion.

Compile the project GitHub - eric612/MobileNet-YOLO: A caffe implementation of MobileNet-YOLO detection network

and then use its Python script darknet2caffe.py.