Using nvmsgconv and nvmsgbroker in RTSP pipeline

• Hardware Platform (Jetson / GPU)
Jetson Orin
• DeepStream Version
6.1.1
• JetPack Version (valid for Jetson only)
5.0.2
• TensorRT Version
8.4.1-1+cuda11.4

I am modifying deepstream_imagedata-multistream_redaction.py to send the metadata over Kafka, as implemented in deepstream_test_4.py.
The deepstream_test_4.py sample runs fine and I can send the metadata through Kafka, but when I run the modified deepstream_imagedata-multistream_redaction.py I don't get any metadata. The RTSP stream works fine and shows all detections, and there are no error messages.
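
For reference, the broker branch is created the same way as in deepstream_test_4.py; a rough sketch of that setup (the config path, connection string, and topic below are placeholders, not my actual values):

    msgconv = Gst.ElementFactory.make("nvmsgconv", "nvmsg-converter")
    msgconv.set_property("config", "dstest4_msgconv_config.txt")  # placeholder config file
    msgconv.set_property("payload-type", 0)  # 0 = full DeepStream schema

    msgbroker = Gst.ElementFactory.make("nvmsgbroker", "nvmsg-broker")
    msgbroker.set_property("proto-lib", "/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so")
    msgbroker.set_property("conn-str", "localhost;9092")  # placeholder broker address
    msgbroker.set_property("topic", "deepstream-events")  # placeholder topic
    msgbroker.set_property("sync", False)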

My pipeline looks like this:

   print("Linking elements in the Pipeline \n")
    streammux.link(pgie)
    # pgie.link(nvvidconv1)
    pgie.link(tracker)
    tracker.link(sgie1)
    sgie1.link(sgie2)
    sgie2.link(nvvidconv1)
    nvvidconv1.link(filter1)
    filter1.link(tiler)
    tiler.link(nvvidconv)
    nvvidconv.link(nvosd)
    
    # The tee after the OSD splits into two branches:
    # branch 1 -> msgconv -> msgbroker (Kafka), branch 2 -> RTSP output
    nvosd.link(tee)

    queue1.link(msgconv)
    msgconv.link(msgbroker)

    queue2.link(nvvidconv_postosd)
    # nvosd.link(nvvidconv_postosd)
    nvvidconv_postosd.link(caps)
    caps.link(encoder)
    encoder.link(rtppay)
    rtppay.link(sink)

    sink_pad = queue1.get_static_pad("sink")
    tee_msg_pad = tee.get_request_pad("src_%u")
    tee_render_pad = tee.get_request_pad("src_%u")
    if not tee_msg_pad or not tee_render_pad:
        sys.stderr.write("Unable to get request pads\n")
    tee_msg_pad.link(sink_pad)
    sink_pad = queue2.get_static_pad("sink")
    tee_render_pad.link(sink_pad)

How can I fix this?

EDIT:
Fixed by changing

    tiler_sink_pad = tiler.get_static_pad("sink")

to:

    tiler_sink_pad = nvosd.get_static_pad("sink")

(the probe attachment is sketched below). But when I test with multiple streams it only sends data from one stream.
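
For completeness, a rough sketch of how the probe is attached after that change (the probe function name tiler_sink_pad_buffer_probe is assumed from the redaction sample; use whatever function builds your NvDsEventMsgMeta):

    # attach the metadata probe to the OSD sink pad instead of the tiler sink pad
    tiler_sink_pad = nvosd.get_static_pad("sink")
    if not tiler_sink_pad:
        sys.stderr.write("Unable to get sink pad of nvosd\n")
    else:
        tiler_sink_pad.add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0)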

The streams are combined into one batched stream after nvstreammux, so when you send data after the OSD it is the combination of all your sources.
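
For context, each buffer after nvstreammux carries an NvDsBatchMeta with one NvDsFrameMeta per input source, and upstream of the tiler each frame still records which input it came from. A rough iteration sketch inside a buffer probe (pyds API as used in the Python samples):

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # each frame in the batch carries the index of the source it came from
        print("source_id:", frame_meta.source_id, "frame_num:", frame_meta.frame_num)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break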

You are right! When I save the images I get images from both streams.

I would like to know why the values of frame_meta.frame_num and frame_meta.pad_index are always 0:

 print("Frame Number is ", frame_meta.frame_num)
 print("pad_index is ", frame_meta.pad_index)
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0
pad_index is  0
Frame Number is  0

Let me clarify: the issue I am having is that one of my secondary detectors is doing OCR. If the streams are combined into one, how can I identify which OCR values belong to which stream? Right now it is combining the OCR results into one, which is incorrect.

Also, the same issue happens when I try to separate and save detections for each stream. I would normally do this using pad_index, but since it is one stream, how can I achieve this?

The batch and metadata info are cleared after the tiler plugin, so when you try to read that info at the OSD plugin you may get wrong values.
For your needs, we suggest you run one stream at a time, or you can try to use nvstreamdemux to separate the streams:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvstreamdemux.html
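
A rough sketch of the nvstreamdemux approach, assuming the demuxer is placed after the last inference element (sgie2 here) in place of the tiler branch, number_sources is the number of input streams, and each per-source branch starts with a plain queue (what follows the queue is up to you); the full pattern is in the deepstream-demux-multi-in-multi-out.py sample mentioned below:

    demux = Gst.ElementFactory.make("nvstreamdemux", "nvstreamdemux")
    if not demux:
        sys.stderr.write("Unable to create nvstreamdemux\n")
    pipeline.add(demux)
    sgie2.link(demux)  # feed the batched, inference-annotated stream into the demuxer

    for i in range(number_sources):
        # one output branch per input source
        queue = Gst.ElementFactory.make("queue", "queue_%u" % i)
        pipeline.add(queue)
        demux_src_pad = demux.get_request_pad("src_%u" % i)
        demux_src_pad.link(queue.get_static_pad("sink"))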


Thanks, I managed to accomplish it on the Kafka consumer side. I will look into nvstreamdemux.

Edit:
deepstream-demux-multi-in-multi-out.py shows how to use nvstreamdemux
