How to integrate an MQTT broker with multiple streams in and out

Please provide complete information as applicable to your setup.

• Hardware Platform (GPU)
• DeepStream Version 6.4
• NVIDIA GPU Driver Version (valid for GPU only) NVIDIA GeForce GTX 1650 / Driver Version: 525.147.05 / CUDA Version: 12.0
• Issue Type( questions, new requirements, bugs) question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello,

I have built a pipeline which is working (graph.pdf (30.5 KB)), but I am wondering whether this is the right architecture.

I have multiple streams which I consume and multiple streams which I generate as HLS. However, if I do

nvinfer -> nvtracker -> nvvideoconvert -> osd -> tee

one path of the tee goes to the msgbroker
one path of the tee goes to demux -> for each stream, create HLS

However, if I use it like this, the demux cannot work properly after the osd.
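For clarity, here is a rough sketch of that tee layout in Python with Gst.parse_launch. The two file sources, the MQTT proto-lib path, conn-str, topic, and the HLS sink settings are placeholders/assumptions for illustration, not my real configuration, and nvmsgconv may additionally need a msgconv config file:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

SAMPLE = "/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264"
PGIE_CFG = "/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt"

pipeline = Gst.parse_launch(
    # Two file sources standing in for the real input streams.
    f"filesrc location={SAMPLE} ! h264parse ! nvv4l2decoder ! queue ! mux.sink_0 "
    f"filesrc location={SAMPLE} ! h264parse ! nvv4l2decoder ! queue ! mux.sink_1 "
    "nvstreammux name=mux batch-size=2 width=1920 height=1080 ! "
    f"nvinfer config-file-path={PGIE_CFG} batch-size=2 ! "
    "nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! "
    "nvvideoconvert ! nvdsosd ! tee name=t "
    # Tee branch 1: metadata out to the MQTT broker (proto-lib, conn-str, topic are placeholders).
    "t. ! queue ! nvmsgconv ! nvmsgbroker "
    "proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_mqtt_proto.so "
    'conn-str="localhost;1883" topic=detections '
    # Tee branch 2: per-stream video out as HLS.
    "t. ! queue ! nvstreamdemux name=demux "
    "demux.src_0 ! queue ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! mpegtsmux ! "
    "hlssink playlist-location=stream0.m3u8 location=stream0_%05d.ts "
    "demux.src_1 ! queue ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! mpegtsmux ! "
    "hlssink playlist-location=stream1.m3u8 location=stream1_%05d.ts"
)

# Sketch only: no bus/EOS handling.
pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)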

Therefore I have chosen another architecture (see the PDF). However, there I need to do the nvvideoconvert and osd operations twice. In theory I would not have to, if I could process the GstBuffer for the msgbroker directly.

Should I then add another probe to the tracker itself?
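To make the question concrete, this is the kind of probe I have in mind: a minimal sketch using the Python bindings, following the metadata-iteration pattern from the official samples (e.g. deepstream-test4). The element name "tracker" and the message-meta step are assumptions on my side:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def tracker_src_pad_buffer_probe(pad, info, u_data):
    # Read detection/tracking metadata straight from the batched buffer,
    # with no nvvideoconvert/nvdsosd in this path.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            # Build and attach NvDsEventMsgMeta here (as deepstream-test4 does)
            # so that a downstream nvmsgconv -> nvmsgbroker branch can publish it.
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach to the tracker's src pad ("tracker" being the nvtracker element
# created elsewhere when building the pipeline):
# tracker.get_static_pad("src").add_probe(
#     Gst.PadProbeType.BUFFER, tracker_src_pad_buffer_probe, 0)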

Please refer to the sample pipeline in the doc. If it still does not work, please share a simplified gst-launch command line.

Please refer to the following command line. It only does OSD once.

gst-launch-1.0 -e \
  nvstreammux name=mux batch-size=2 width=1920 height=1080 ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch-size=2 ! \
  nvdsosd ! nvstreamdemux name=demux \
  filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_0 \
  filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! queue ! mux.sink_1 \
  demux.src_0 ! queue ! nveglglessink \
  demux.src_1 ! queue ! nveglglessink

Hi,

many thanks for the example - I will try it out. However, when I started this, I had already raised this question with regard to the demux and where to put the osd part - here is the link to the forum topic: Pipeline Achitecture - Sometimes freezes

Your colleague said that I should do the osd after the demux part, because I had issues where the osd worked on only one video file and the other one did not get annotated.

And also referring to this code example: deepstream_python_apps/apps/deepstream-demux-multi-in-multi-out/deepstream_demux_multi_in_multi_out.py at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub
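If I read that sample correctly, each demuxed stream gets its own convert/osd branch, roughly like this (simplified sketch, error handling omitted, element and sink names illustrative):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def link_demux_branch(pipeline, demux, index):
    # One per-stream branch after nvstreamdemux, as in the linked sample:
    # queue -> nvvideoconvert -> nvdsosd -> sink.
    queue = Gst.ElementFactory.make("queue", f"queue_{index}")
    conv = Gst.ElementFactory.make("nvvideoconvert", f"convert_{index}")
    osd = Gst.ElementFactory.make("nvdsosd", f"osd_{index}")
    sink = Gst.ElementFactory.make("nveglglessink", f"sink_{index}")
    for elem in (queue, conv, osd, sink):
        pipeline.add(elem)
    # Request the demux src pad for this stream and link the chain.
    demux.get_request_pad(f"src_{index}").link(queue.get_static_pad("sink"))
    queue.link(conv)
    conv.link(osd)
    osd.link(sink)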

Can you advise?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Did you try it out? Does the pipeline I shared work well? Is it helpful for the "I need to do the nvvideoconvert and osd operations twice" part?
Please open a new topic if you have other DeepStream problems.
