Questions about how to deploy YOLOv4-tiny in DeepStream

I referenced @kayccc's post: YOLOV4 example in DeepStream - #3

That link does not cover YOLOv4-tiny. I want to deploy a YOLOv4-tiny model in DeepStream, either the Darknet model or one trained in PyTorch, but I found that neither the Darknet YOLOv4-tiny model nor the PyTorch-trained model can be converted to ONNX. How do I deal with this?

Sorry for the late response; we will investigate soon.

Regarding exporting the model to ONNX, you could ask in the TensorRT forum or in the related GitHub repository.


Thank you for your attention. If you find a good solution, please let me know.



I found a repo that can work, going from Darknet to ONNX to TensorRT. Could you try to verify it?
We found a GitHub repo covering YOLOv4-tiny: tensorrt_inference/Yolov4 at master · linghu8812/tensorrt_inference · GitHub.
First, clone the repo:

git clone https://github.com/linghu8812/tensorrt_inference.git
Then download the yolov4-tiny weights, following the instructions in the repo.

Follow the README and convert the model to ONNX:
python3 --cfg_file cfg/yolov4-tiny.cfg --weights_file yolov4-tiny.weights --output_file yolov4-tiny.onnx --strides 32 16 --neck FPN
Use trtexec to convert the ONNX model to a TensorRT plan:
trtexec --onnx=yolov4-tiny.onnx --saveEngine=yolov4-tiny.plan --verbose
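Once you have the .plan file, DeepStream's nvinfer element can load it directly. A rough sketch of the relevant config-file section follows; the file paths, class count, and especially the custom bounding-box parser entries (`parse-bbox-func-name`, `custom-lib-path`) are assumptions that depend on which YOLO parser library you build:

```ini
[property]
gpu-id=0
# 1/255, the usual normalization for Darknet-family models
net-scale-factor=0.0039215697906911373
model-engine-file=yolov4-tiny.plan          # path is an assumption
labelfile-path=labels.txt                   # path is an assumption
batch-size=1
network-mode=0                              # 0=FP32, 1=INT8, 2=FP16
num-detected-classes=80                     # COCO; change for your dataset
gie-unique-id=1
network-type=0                              # 0=detector
# Custom YOLO output parser -- names below are assumptions, match your plugin:
parse-bbox-func-name=NvDsInferParseCustomYoloV4
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
```

Without a matching custom parser library, nvinfer cannot decode the raw YOLO output tensors into bounding boxes, so that part is required for deployment.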

And I found another repo about this; you can also try it:

Tianxiaomo/pytorch-YOLOv4: PyTorch, ONNX and TensorRT implementation of YOLOv4 (GitHub)

Thanks, but what I need is to convert a YOLOv4-tiny model trained with PyTorch.