Help with implementing Custom trained Yolo model for inference

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.1.1
• JetPack Version (valid for Jetson only) - NA
• TensorRT Version -
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)- Questions

• Issue: We are trying to build a custom YOLO pipeline using DeepStream. We have trained our own custom YOLOv5 model, with the weights saved in .pt format. We want to use these custom trained weights for inference in DeepStream instead of the pre-existing DeepStream YOLO code and weights.

We are trying to use the code in the link below for YOLO inference, but that code uses the pre-trained YOLO weights referenced in its config files. Could you suggest what changes we need to make to those config files so that we can use our own custom model instead of the pre-existing YOLO weights?

[deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out at f70dcc966d3a7db5389425d725f056a9a3899b84 · NVIDIA-AI-IOT/deepstream_python_apps · GitHub]

Thank you very much in advance!

PS: We have been trying to do this for the past few weeks with no luck. It would be really helpful if someone could guide us through the exact changes we need to make in the config files so that we can use the custom YOLO model for inference :)

There has been no update from you for a while, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one.

  1. The DeepStream nvinfer plugin cannot process a .pt model directly; please refer to the Gst-nvinfer — DeepStream 6.1.1 Release documentation. You can convert the model to ONNX.
  2. You can modify the configuration file; here is a sample: deepstream_tao_apps/pgie_yolov5_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
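As an illustration of the kind of edits involved, a minimal nvinfer `[property]` section for a custom ONNX model might look like the fragment below. The file names, class count, and especially the parser function/library names are placeholders: use your own exported model, your own label file, and the parse function exported by whatever YOLOv5 post-processing library you build (see the linked sample for the exact names it uses).

```
[property]
gpu-id=0
# 1/255 pixel normalization, typical for YOLO models
net-scale-factor=0.0039215697906911373
model-color-format=0
# Point these at YOUR exported model instead of the sample's pre-trained files
onnx-file=yolov5_custom.onnx
model-engine-file=yolov5_custom.onnx_b1_gpu0_fp16.engine
labelfile-path=labels_custom.txt
batch-size=1
network-mode=2
# Must match the number of classes your custom model was trained on
num-detected-classes=3
gie-unique-id=1
cluster-mode=2
maintain-aspect-ratio=1
# Placeholder names: use the bbox-parse function and .so from the
# custom post-processing library you build for YOLOv5
parse-bbox-func-name=NvDsInferParseCustomYoloV5
custom-lib-path=libnvds_infercustomparser_yolov5.so

[class-attrs-all]
pre-cluster-threshold=0.25
```

The first run builds and serializes the TensorRT engine (which can take a while); subsequent runs reuse the engine file named by `model-engine-file` if it exists.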