How to convert a general YOLOv2/YOLOv3 model to be used in DeepStream

Hi,
I want to know whether we need to convert a general YOLOv2/YOLOv3 model before using it in DeepStream.

**• Hardware Platform (Jetson / GPU)** T4
**• DeepStream Version** 4.0
**• JetPack Version (valid for Jetson only)**
**• TensorRT Version** 5.1.5
**• NVIDIA GPU Driver Version (valid for GPU only)** 450.36.06

Hi,

Please check our YOLO sample, located at /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo, directly.
Thanks.

Yes, I have checked it. Here we directly give the YOLO model and its .cfg file as input in config_infer_primary_yoloV*.txt. Can we give it any YOLO model and .cfg file trained with Darknet?

Also, can you explain what these specific settings mean, how they are used, and whether they are necessary?
1) parse-bbox-func-name=NvDsInferParseCustomYoloV2
2) custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

Hi,

If your model is YOLOv2 or YOLOv3, you can customize it with this document:


parse-bbox-func-name and custom-lib-path specify the custom parser used for the output bounding boxes: the function name and the shared library that implements it.
For the same YOLO architecture, you can use the default implementation directly.
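For reference, a minimal sketch of the relevant [property] entries in a config_infer_primary_yoloV3-style file; the file names below assume the layout of the objectDetector_Yolo sample and should be adjusted to your own model:

```
[property]
# Darknet model definition and weights (paths are sample assumptions)
custom-network-config=yolov3.cfg
model-file=yolov3.weights
# Custom bounding-box parser for YOLOv3 output layers
# (use NvDsInferParseCustomYoloV2 for a YOLOv2 model)
parse-bbox-func-name=NvDsInferParseCustomYoloV3
# Shared library built from the sample's nvdsinfer_custom_impl_Yolo sources
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

As long as your Darknet-trained model keeps the standard YOLOv2/YOLOv3 output layers, pointing these entries at your own .cfg and .weights files should be enough; a modified architecture would need a matching custom parser.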

Thanks.