Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.1.1
• JetPack Version (valid for Jetson only) - NA
• TensorRT Version -
• NVIDIA GPU Driver Version (valid for GPU only) -
• Issue Type (questions, new requirements, bugs) - questions
**Issue** - Our team is trying to build a DeepStream application with a YOLO model. We followed the marcoslucianops/DeepStream-Yolo repository (NVIDIA DeepStream SDK 6.1.1 / 6.1 / 6.0.1 / 6.0 configuration for YOLO models) and built a YOLOv5 engine. Could anyone please let me know whether I can use just the yolo.cfg and yolo.wts files without including the libnvdsinfer_custom_impl_Yolo.so file (custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so)?
If not, what is the purpose of libnvdsinfer_custom_impl_Yolo.so in this context?
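For context on the question above: in the marcoslucianops/DeepStream-Yolo setup, that shared library provides the custom output parser (the repository registers it as NvDsInferParseYolo) that the nvinfer plugin calls to convert the raw YOLO output tensors into bounding boxes; nvinfer's built-in parsers do not understand the YOLO output layout. The config keys that wire it in look roughly like this (paths and exact key values here follow the repository's sample config and may differ in your setup):

```ini
# Excerpt from a typical DeepStream-Yolo pgie config (illustrative, not verbatim)
[property]
custom-network-config=yolo.cfg
model-file=yolo.wts
# Name of the bbox-parsing function exported by the custom library
parse-bbox-func-name=NvDsInferParseYolo
# The library that implements the parser (and the cfg/wts engine builder)
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
```

Removing custom-lib-path therefore removes both the engine builder for the cfg/wts pair and the bbox parser, which is why the pipeline stops producing detections.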
I come from a Python background, so understanding C++ files and the terms associated with them makes NVIDIA DeepStream difficult to navigate.
Any resources/answers are highly appreciated. Thank you so much :)
I changed network-type=100 and commented out the custom-lib-path. However, when I tested it, I did not get any output. Could you please point out where I am going wrong?
I did not quite understand what you meant by "you can access output data via NvDsInferLayerInfo". Can you please elaborate on this? Could you also explain what post-processing in Python means?
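To illustrate the question above: "post-processing" means taking the raw numbers the network emits (which, with network-type=100 and output-tensor-meta enabled, you would read out of the tensor metadata, e.g. via NvDsInferLayerInfo) and turning them into usable detections yourself. The sketch below is a toy, self-contained example of that idea in plain Python/NumPy; the array layout and threshold are invented for illustration and are not the actual YOLOv5 output format:

```python
import numpy as np

def parse_detections(raw, conf_threshold=0.5):
    """Toy post-processing step: filter raw network output into detections.

    raw: (N, 6) array of [x, y, w, h, objectness, class_score] rows
    (a made-up layout for illustration only).
    """
    results = []
    for x, y, w, h, obj, cls in raw:
        score = obj * cls  # combined confidence
        if score >= conf_threshold:
            results.append({"bbox": (float(x), float(y), float(w), float(h)),
                            "score": float(score)})
    return results

raw = np.array([
    [0.1, 0.2, 0.3, 0.3, 0.9, 0.8],   # confident row -> kept (score 0.72)
    [0.5, 0.5, 0.2, 0.2, 0.2, 0.5],   # weak row -> filtered out (score 0.10)
])
print(parse_detections(raw))
```

In a real DeepStream Python app you would do the equivalent inside a pad probe, reading the output layers from the frame's tensor metadata instead of a hand-made array, and typically also apply NMS.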
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new topic.
Thanks
Please refer to the DeepStream sample deepstream-infer-tensor-meta-test; here is the key setting:
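The setting itself is not included in the post above. Based on that sample's inference config, the relevant properties are most likely the ones below, which make nvinfer skip built-in parsing and attach the raw output tensors as metadata so the application can post-process them itself (treat this as an assumption, not a quote from the thread):

```ini
# Likely key settings from the deepstream-infer-tensor-meta-test pgie config
[property]
# 100 = "other": nvinfer does no built-in detection/classification parsing
network-type=100
# Attach raw output tensors (NvDsInferTensorMeta / NvDsInferLayerInfo) to the buffer
output-tensor-meta=1
```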