Implement custom YOLO or any custom model in DeepStream using Python

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.1.1
• JetPack Version (valid for Jetson only) - NA
• TensorRT Version -
• NVIDIA GPU Driver Version (valid for GPU only) -
• Issue Type (questions, new requirements, bugs) - questions

**Issue** - Our team is trying to build a DeepStream application with a YOLO model. We followed GitHub - marcoslucianops/DeepStream-Yolo (NVIDIA DeepStream SDK 6.1.1 / 6.1 / 6.0.1 / 6.0 configuration for YOLO models) and built a YOLOv5 engine. Could anyone please let me know whether I can use just the yolo.cfg and yolo.wts files without including the libnvdsinfer_custom_impl_Yolo.so file (custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so)?
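For context, the relevant part of the config looks roughly like this (a sketch assuming the layout of config_infer_primary.txt from the DeepStream-Yolo repo; the parser and engine-builder function names are taken from that repo's README and may differ in other setups):

[property]
# Darknet-style network definition and converted weights
custom-network-config=yolo.cfg
model-file=yolo.wts
network-type=0
# These three entries are what pull in the custom library: it builds the
# TensorRT engine from the .cfg/.wts pair and parses the raw YOLO output
# into bounding boxes.
parse-bbox-func-name=NvDsInferParseYolo
engine-create-func-name=NvDsInferYoloCudaEngineGet
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so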

If not, what is the use of libnvdsinfer_custom_impl_Yolo.so in this context?

I come from a Python background, so understanding the C++ files and the terminology around them makes NVIDIA DeepStream hard to navigate.

Any resources or answers are highly appreciated. Thank you so much :)

Please refer to How to add custom post process after infer in deepstream python app - #8 by 328541716

Thanks for your reply. I have modified the config file in the following manner:
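Roughly, the changed lines were as follows (a sketch of the modification described below; the commented path is the one from my earlier post):

network-type=100
#custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so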

I changed network-type to 100 and commented out the custom-lib-path. However, when I tested it, I did not get any output. Could you please point out where I am going wrong?

I did not quite understand what you meant by “you can access output data NvDsInferLayerInfo”. Can you please elaborate on this? Could you also explain what post-processing in Python means?

Thank you :)

Since there has been no update from you for a while, we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

Please refer to the DeepStream sample deepstream-infer-tensor-meta-test; here are the key settings:

# 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
network-type=100

# Enable tensor metadata output
output-tensor-meta=1
Then the inference results (NvDsInferLayerInfo) can be accessed in probe functions; you need to port the post-processing code to Python.
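For illustration, here is a minimal sketch of such a probe, modeled on the deepstream-ssd-parser sample from deepstream_python_apps. The layer shape handling is a placeholder you must adapt to your YOLO model's output layout:

import ctypes
import numpy as np
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    # Runs on every buffer leaving the nvinfer (pgie) source pad.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            # Tensor output is attached only when output-tensor-meta=1.
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    # Each entry is an NvDsInferLayerInfo with name, dims, buffer.
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                      ctypes.POINTER(ctypes.c_float))
                    num_elements = 1  # placeholder: compute from the layer's dimensions
                    output = np.ctypeslib.as_array(ptr, shape=(num_elements,))
                    # Run your YOLO decode + NMS on `output` here (the Python
                    # port of the C++ post-processing code).
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

Attach it to the inference element's source pad, e.g. pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0).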
