Run YOLOv5 in DeepStream with an .engine generated externally

Why don’t you convert the PyTorch model to ONNX?

That is not true. There are “custom-network-config” and “model-file” parameters in the gst-nvinfer configuration. Please refer to the “Gst-nvinfer — DeepStream 6.3 Release documentation” page.
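As a sketch, the two properties go into the `[property]` group of the nvinfer configuration file; the file names below are placeholders for your own network description and weights:

```ini
[property]
# Darknet-style network description (placeholder file name)
custom-network-config=yolov5.cfg
# Matching weights file (placeholder file name)
model-file=yolov5.wts
```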

There are YOLOv2 and YOLOv3 model samples showing how to configure .cfg and .wts files with a customized model parser under /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo.
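For reference, the sample’s YOLOv3 configuration wires the .cfg/.wts pair to the custom parser library built from those sources roughly like this (paths and function names follow the shipped sample; adjust them for your own model):

```ini
[property]
custom-network-config=yolov3.cfg
model-file=yolov3.weights
# Custom bounding-box parser and engine builder compiled from the sample sources
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV3
engine-create-func-name=NvDsInferYoloCudaEngineGet
```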

We suggest you convert the PyTorch model to an ONNX model, which can be deployed with DeepStream directly without any customized model parser.

There are also some third-party YOLOvX DeepStream deployment samples: DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums