Deepstream-Test-1-usbcam with three V4L2 sources

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Orin AGX
• DeepStream Version
6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Run attached code with 2 or more sources
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I want to get the source ID from the probe buffer, but I don't get the frame number or the source ID. I assume it has to do with my pipeline, but please assist. I get the image, the box info, and the class, but frame_meta.frame_num does not produce anything, nor do I see the source ID in the perf_print_callback.
Any idea why?

Previous thread and ref. to how I set the pipeline.
main.zip (4.0 KB)

I think so too. Do I need to create three muxes and a pgie for each source?
I am not sure I create the pipeline correctly, since I don't use the example source_decode_bins.

    # Creating srcpads and sinkpads per camera source
    srcpads = []
    sinkpads = []
    for s in range(number_sources):
        srcpad = caps_vidconvsrcs[s].get_static_pad("src")
        sinkpad = streammux.get_request_pad("sink_" + str(s))
        print("sink_number: " + str(s))
        srcpads.append(srcpad)
        sinkpads.append(sinkpad)
    # Linking all the elements up to streammux
    for i in range(number_sources):
        sources[i].link(caps_v4l2srcs[i])
        caps_v4l2srcs[i].link(vidconvsrcs[i])
        vidconvsrcs[i].link(nvvidconvsrcs[i])
        nvvidconvsrcs[i].link(caps_vidconvsrcs[i])
        srcpads[i].link(sinkpads[i])

    # Rest of the pipeline
    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(queue2)
    queue2.link(tiler)
    tiler.link(queue3)
    queue3.link(nvvidconv)
    nvvidconv.link(queue4)
    queue4.link(nvosd)
    nvosd.link(queue5)
    queue5.link(transform)
    transform.link(sink)

    # Create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # This is where I get the src pad from pgie to use when adding the probe
    pgie_src_pad = pgie.get_static_pad("src")
    if not pgie_src_pad:
        sys.stderr.write(" Unable to get src pad of pgie \n")

    # Adding the probe
    pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)
    GLib.timeout_add(5000, perf_data.perf_print_callback)
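For reference, the probe callback would walk the batch metadata attached to the buffer. In the real probe (a sketch, assuming the pyds bindings) you would obtain the batch via `pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))` and cast each node with `pyds.NvDsFrameMeta.cast(l_frame.data)`; the walk pattern itself looks like this, shown with a plain-Python stub since pyds requires a DeepStream install:

```python
from types import SimpleNamespace

def read_frame_info(batch_meta):
    """Walk batch_meta.frame_meta_list and collect (frame_num, source_id)
    for every frame that nvstreammux batched together."""
    results = []
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        # With pyds this would be: frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        frame_meta = l_frame.data
        # pad_index matches the streammux sink pad ("sink_0", "sink_1", ...)
        results.append((frame_meta.frame_num, frame_meta.pad_index))
        l_frame = l_frame.next
    return results

# Stub batch with two sources, mimicking what nvstreammux produces:
f1 = SimpleNamespace(frame_num=10, pad_index=0)
f2 = SimpleNamespace(frame_num=10, pad_index=1)
n2 = SimpleNamespace(data=f2, next=None)
n1 = SimpleNamespace(data=f1, next=n2)
batch = SimpleNamespace(frame_meta_list=n1)
print(read_frame_info(batch))  # [(10, 0), (10, 1)]
```

Because this probe sits on the pgie src pad, i.e. before the tiler, the per-frame numbers are still intact.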

Should I create three muxers and three pgie (one per source), merge them to the tiler element and add the probe to nvosd instead?

Please don't get the frame_num after the tiler plugin. The plugin will split the batch and set the frame_num to 0.

Hi,

Thanks. I followed the provided examples (deepstream-test3), and isn't that where the tiler is placed?
The pgie is linked before the tiler and the probe added after nvosd, same as in the examples?

Is this something to do with the fact that I am not using a playbin? Otherwise I don't understand why this would not work…

In fact, @yuweiw has already given the correct answer.

As mentioned in the tiler documentation, NvDsBatchMeta will be transformed, so if you want to get the correct NvDsBatchMeta information, you need to place the probe before the tiler.

If your code is the same as this, then it should be able to obtain frame_num and stream_index correctly.

Thank you. I did get the answer from Yuweiw, and I have the code (as in the source attached).
The code is as you see. I could not get it to work at first, but I have now changed something that must have been wrong, and I get the metadata and the image. I don't know why I suddenly got the source data, but it works. I also found that I was missing a color conversion in the caps filter; this is now added to the code, so that image extraction from the gst_buffer works.

For others who want this, including XML (Pascal VOC) output, please see the attached code. Perhaps it will be of use to someone else.

main.zip (4.0 KB)
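The attachment is not reproduced here, but as a rough sketch of what writing Pascal VOC annotations involves (the function name and box format are my own for illustration, not necessarily what the attached code uses), the standard library's `xml.etree.ElementTree` is enough:

```python
import xml.etree.ElementTree as ET

def voc_annotation(filename, width, height, boxes):
    """Build a minimal Pascal VOC annotation.
    boxes is a list of (label, xmin, ymin, xmax, ymax) tuples."""
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    ET.SubElement(size, "depth").text = "3"
    for label, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(ann, "object")
        ET.SubElement(obj, "name").text = label
        bb = ET.SubElement(obj, "bndbox")
        for tag, v in zip(("xmin", "ymin", "xmax", "ymax"),
                          (xmin, ymin, xmax, ymax)):
            ET.SubElement(bb, tag).text = str(int(v))
    return ET.tostring(ann, encoding="unicode")

# One detected box from a 1280x720 frame:
xml_str = voc_annotation("frame_0.jpg", 1280, 720,
                         [("person", 100, 50, 300, 400)])
print(xml_str)
```

In a DeepStream probe you would fill `boxes` from each object's `rect_params` (left, top, width, height) while walking the frame's `obj_meta_list`.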

