How to distinguish different RTSP sources in the objectDetector_Yolo multi-RTSP sample?

I use multiple RTSP sources in deepstream_app_config_yoloV3_tiny.txt.
I want to control some IoT components based on which RTSP stream the detections come from.
I can run deepstream-app -c deepstream_app_config_yoloV3_tiny.txt with 2 RTSP sources.
I modified the actions in nvdsinfer_custom_impl_Yolo/nvdsparsebbox_Yolo.cpp, inside static std::vector<NvDsInferParseObjectInfo> decodeYoloV3Tensor(…).
But I can't find a way to tell which info comes from which RTSP source.
How do I distinguish the different RTSP sources in the DeepStream objectDetector_Yolo project?
If anyone knows, please tell me, thanks.
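
For reference, my two RTSP inputs are just two [source] groups in the config file, roughly like this (the URIs below are placeholders, not my real streams):

```
[source0]
enable=1
# type=4 means RTSP source
type=4
uri=rtsp://192.168.0.10:554/stream1
num-sources=1

[source1]
enable=1
type=4
uri=rtsp://192.168.0.11:554/stream2
num-sources=1

[streammux]
# batch the two streams together for nvinfer
batch-size=2
```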

You can get the "source id" from the metadata.

Thanks!

Hi mchi,
Thanks for your reply.
But I don't know where to get the "source id". Could you give me more details?
For example, which variable in which file?
Does it refer to something in deepstream_app_config_yoloV3_tiny.txt?
How can I use it in nvdsparsebbox_Yolo.cpp, inside static std::vector<NvDsInferParseObjectInfo> decodeYoloV3Tensor()?
Or do I need to use a different source ID in another file, or another function?
cycheng

You may add a probe after the pgie (nvinfer) component, or after another component; from there you can get the source ID from the source_id field of NvDsFrameMeta.
You can refer to sources/apps/sample_apps/deepstream-user-metadata-test/deepstream_user_metadata_app.c::osd_sink_pad_buffer_probe
for how to get the metadata info.
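
For example, a minimal sketch of such a probe callback, patterned after osd_sink_pad_buffer_probe (the function name source_id_probe and the g_print action are only illustrative):

```c
/* Minimal sketch of a buffer probe that reads source_id from the frame
 * metadata. Patterned after osd_sink_pad_buffer_probe in
 * deepstream_user_metadata_app.c; names like source_id_probe are mine. */
#include <gst/gst.h>
#include "gstnvdsmeta.h"

static GstPadProbeReturn
source_id_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  /* One NvDsFrameMeta per frame in the batch; each carries source_id. */
  for (NvDsMetaList * l_frame = batch_meta->frame_meta_list; l_frame;
      l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;

    /* Iterate the objects YOLO detected in this frame. */
    for (NvDsMetaList * l_obj = frame_meta->obj_meta_list; l_obj;
        l_obj = l_obj->next) {
      NvDsObjectMeta *obj_meta = (NvDsObjectMeta *) l_obj->data;
      g_print ("source %u: class %d detected\n",
          frame_meta->source_id, obj_meta->class_id);
      /* trigger your per-source IoT action here */
    }
  }
  return GST_PAD_PROBE_OK;
}
```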

Yes, you can also refer to this code: deepstream_reference_apps/back_to_back_detectors.c at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub. The source_id is in the NvDsFrameMeta structure.

Hi amycao & mchi,

Thanks for your replies.
It seems I need to add a probe to inspect the YOLO detection results.
Here is what I exactly want to do: https://forums.developer.nvidia.com/t/how-to-draw-marker-box-in-the-ourput-video-of-deepstream-yolov3-project/125847 (judge two RTSP sources with a YoloV3 model to control an IoT device).
It seems very complicated and time-consuming to rewrite this part, because I would need a very clear understanding of the structure of the DeepStream SDK and GStreamer.
I appreciate your replies, but first I want to estimate the development time.

I chose the DeepStream SDK because I hoped it would be the simplest way to reach my goal.

But in this YOLO case, it seems I need to spend more development time.
It seems I can start from mchi's answer, but I still don't understand the relationship between running deepstream-app -c ... and handling the different source IDs. I think I need to combine the DeepStream objectDetector_Yolo sample with NVIDIA-AI-IOT/deepstream_reference_apps if I start from mchi's solution?

I also wonder: if I add the probe as in amycao's answer, it looks like re-processing the YOLO results; is the efficiency acceptable?
I would like to clarify the relationship between the "DeepStream objectDetector_Yolo sample", "deepstream-app -c ...", the "OSD probe" (amycao), and "NVIDIA-AI-IOT/deepstream_reference_apps" (mchi). In my understanding, I need to combine the objectDetector_Yolo sample with NVIDIA-AI-IOT/deepstream_reference_apps, and maybe also add the OSD probe, and after doing all that I should reach my goal?

Is the DeepStream SDK suitable for my goal of judging two RTSP sources with a YoloV3 model to control an IoT device? So far I can only run the YoloV3_tiny model. I wonder whether, after I do the combining and the probe part, the performance of the YOLO model will still be good enough. I don't want to spend a lot of time only to find the performance is too low to reach my goal; my customers won't let me waste time on that. Or maybe this is just my misunderstanding? Can you give me any suggestions?
Thanks a lot.
Thanks a lot

Best regards,

cycheng0122

Hi @cycheng0122,
No matter which way you go, they are essentially the same: they first create the GStreamer components, set their properties (some properties are read from the configuration file), and link them in sequence.

Is your pipeline like the one below? If it's similar, I think DeepStream is a good fit for you and makes the best use of the NVIDIA platform.

Stream#A → decoding → |
Stream#B → decoding → | → nvstreammux (batch-size=2) → nvinfer (with yolov3_tiny) → (add a probe on the OSD sink pad to identify the streams and act accordingly) → osd → display / other action

Certainly, the probe can be added on other components besides OSD.
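
For example, the probe sketched earlier could be attached to the OSD sink pad like this (a minimal sketch; nvosd here is assumed to be the nvdsosd element handle from your pipeline setup code):

```c
/* Attach the source_id_probe callback (sketched earlier) to the OSD
 * sink pad. "nvosd" is assumed to be the nvdsosd element created
 * during pipeline setup. */
GstPad *osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
if (osd_sink_pad) {
  gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
      source_id_probe, NULL, NULL);
  gst_object_unref (osd_sink_pad);
}
```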

Hi mchi,

Thanks, I will study it

Best regards,

cycheng0122

Hi @cycheng0122,
May I have your setup info like below?

• Hardware Platform (dGPU): aws ec2 g4dn.xlarge
• DeepStream Version: deepstream_sdk_v4.0.2
• TensorRT Version: 5.0
• NVIDIA GPU Driver Version (valid for GPU only): 440.82

Thanks!

Hi mchi,

Please see the info below, thanks:
• Hardware Platform (dGPU): Jetson Nano
• DeepStream Version: deepstream_sdk_v4.0.2
• TensorRT Version: 6.0.1.10

Thanks

We hope every user can provide the information below: platform, JetPack version, and DeepStream version.

**• Hardware Platform (dGPU):**
**• DeepStream Version:**
**• TensorRT Version:**
**• NVIDIA GPU Driver Version (valid for GPU only):**

I have edited my answer; please refer to my previous reply, thanks.