Osd_sink_pad_buffer_probe receiving frames every second instead of the time of frame

• Hardware Platform (dGPU)
• DeepStream Version 6.0
• TensorRT Version 8.0.1.6
• NVIDIA GPU Driver Version 470.103.01
• Issue Type (questions)

Our application is based on the deepstream-imagedata-multistream example from the DeepStream Python apps. The only major difference is the introduction of drop-frame-interval to reduce the processing FPS. However, the problem described below occurs both at the camera's native FPS and when a drop-frame-interval is set.

While working on this we observed that all frames belonging to a given second are delivered to osd_sink_pad_buffer_probe as one batch only after that second has elapsed, instead of being delivered as they arrive.

Example:
With the camera running at 10 FPS we observed the behavior in the table below. We expected the OSD probe to be called 10 times spread across each second, but it is only called after the second completes. There is a gap of roughly 0.8 to 1 s between Frame 10 and Frame 11 because they belong to two different batches.

| Frame | Frame time | OSD probe call time |
| --- | --- | --- |
| Frame 1 | 12:00:00.0 | 12:00:01+ |
| Frame 2 | 12:00:00.1 | 12:00:01+ |
| Frame 3 | 12:00:00.2 | 12:00:01+ |
| Frame 4 | 12:00:00.3 | 12:00:01+ |
| Frame 5 | 12:00:00.4 | 12:00:01+ |
| Frame 6 | 12:00:00.5 | 12:00:01+ |
| Frame 7 | 12:00:00.6 | 12:00:01+ |
| Frame 8 | 12:00:00.7 | 12:00:01+ |
| Frame 9 | 12:00:00.8 | 12:00:01+ |
| Frame 10 | 12:00:00.9 | 12:00:01+ |
| Frame 11 | 12:00:01.0 | 12:00:02+ |
| Frame 12 | 12:00:01.1 | 12:00:02+ |
| Frame 13 | 12:00:01.2 | 12:00:02+ |
| Frame 14 | 12:00:01.3 | 12:00:02+ |
| Frame 15 | 12:00:01.4 | 12:00:02+ |
| Frame 16 | 12:00:01.5 | 12:00:02+ |
| Frame 17 | 12:00:01.6 | 12:00:02+ |
| Frame 18 | 12:00:01.7 | 12:00:02+ |
| Frame 19 | 12:00:01.8 | 12:00:02+ |
| Frame 20 | 12:00:01.9 | 12:00:02+ |

Our osd_sink_pad_buffer_probe itself is very efficient; it finishes in 3-5 ms, so the delay is not being spent inside this function.
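For reference, a minimal sketch of instrumentation that reproduces the measurements above (the probe and its printout are illustrative, not our production code):

    import time
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    last_arrival = None

    def timing_probe(pad, info, u_data):
        # Illustrative probe: compares the wall-clock arrival of each buffer
        # with its pipeline timestamp (PTS) to expose batching delay.
        global last_arrival
        now = time.monotonic()
        gap = 0.0 if last_arrival is None else now - last_arrival
        last_arrival = now
        buf = info.get_buffer()
        pts = buf.pts / Gst.SECOND if buf.pts != Gst.CLOCK_TIME_NONE else float("nan")
        print(f"arrived at {now:.4f}s, pts {pts:.3f}s, gap since previous buffer {gap:.4f}s")
        return Gst.PadProbeReturn.OK

Attached to the same nvosd sink pad, this prints a near-zero gap for frames within a batch and a gap of roughly one second at each batch boundary.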

Is there a way to have the OSD probe called for every frame as soon as it arrives?

Hi @user151731, the whole pipeline is synchronous, so it may be spending a long time in some other plugin. Could you attach your code? How many sources do you use? What are the basic parameters of the video (resolution, codec, etc.)?
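For context, how quickly nvstreammux pushes a batch downstream is governed by a few of its properties; the sketch below shows the kind of settings worth checking in this situation (example values only, not taken from the post):

    # Example only: nvstreammux properties that control batching latency.
    streammux.set_property("batch-size", len(sources))      # frames assembled per batch
    streammux.set_property("batched-push-timeout", 40000)   # push a partial batch after 40 ms (value in microseconds)
    streammux.set_property("live-source", 1)                # timestamp handling for live inputs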

Main pipeline creation code:
drop_frame_rate (passed as drop-frame-interval) is set to 3

    # Standard GStreamer initialization
    GObject.threads_init()
    Gst.init(None)

    # Create gstreamer elements
    pipeline = create_pipeline()

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = create_streammux()
    pipeline.add(streammux)

    streamdemux = create_streamdemux()

    for source_num in range(len(sources)):
        source = sources[source_num]
        source_bin = create_source_bin(source_num, source, drop_frame_rate)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname = "sink_%u" % source_num
        sinkpad = streammux.get_request_pad(padname)
        if not sinkpad:
            sys.stderr.write(f"Unable to create sink pad bin for source {source_num} \n")
        srcpad = source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)

    source_ids = list(range(len(sources)))
    pgie = create_pgie()

    # create tracker
    if config.use_nvtracker:
        tracker = create_tracker(source_ids)


    # Use converter to convert from NV12 to RGBA as required by nvosd
    nvvidconv = create_nvvidconv(source_ids)

    # Create OSD to draw on the converted RGBA buffer.
    nvosd = create_nvosd(source_ids)
    # nvvidconv_postosd = create_nvvidconv()

    """
    RTSP Out Code Start
    """
    # Create a caps filter
    caps = create_capsfilter(source_ids)

    # Make the encoder
    encoder = create_encoder(source_ids)

    # Make the payload-encode video into RTP packets
    rtppay = create_rtppay(source_ids)

    # Make the UDP sink
    udpsink_start_port = 5400
    udpsink_port_list = list(range(udpsink_start_port, udpsink_start_port + len(sources)))
    sink = create_udpsink(source_ids, udpsink_port_list)

    """
    RTSP Out Code End
    """
    streammux = add_streammux_props(streammux, frame_width, frame_height)  # setting properties of streammux
    pgie.set_property('config-file-path', primary_model_config_file)  # setting properties of pgie

    pipeline.add(pgie)
    pipeline.add(streamdemux)

    if config.use_nvtracker:
        for element in tracker.values():
            pipeline.add(element)
    for element_dict in (nvvidconv, nvosd, caps, encoder, rtppay, sink):
        for element in element_dict.values():
            pipeline.add(element)


    # Link the elements together:
    # uridecodebin -> streammux -> nvinfer -> nvstreamdemux ->
    # (per source) nvtracker -> nvvidconv -> nvosd ->
    # caps -> encoder -> rtppay -> udpsink


    streammux.link(pgie)
    pgie.link(streamdemux)

    for source_num in range(len(sources)):
        srcpad1 = streamdemux.get_request_pad(f"src_{source_num}")
        if not srcpad1:
            sys.stderr.write(" Unable to get the src pad of streamdemux \n")
            continue
        if config.use_nvtracker:
            sinkpad1 = tracker[f"tracker_{source_num}"].get_static_pad("sink")
            if not sinkpad1:
                sys.stderr.write(" Unable to get sink pad of tracker \n")
        else:
            sinkpad1 = nvvidconv[f"nvvidconv_{source_num}"].get_static_pad("sink")
            if not sinkpad1:
                sys.stderr.write(" Unable to get sink pad of nvvidconv \n")
        
        srcpad1.link(sinkpad1)
        #######################
        if config.use_nvtracker:
            tracker[f"tracker_{source_num}"].link(nvvidconv[f"nvvidconv_{source_num}"])
        nvvidconv[f"nvvidconv_{source_num}"].link(nvosd[f"nvosd_{source_num}"])
        nvosd[f"nvosd_{source_num}"].link(caps[f"caps_{source_num}"])
        caps[f"caps_{source_num}"].link(encoder[f"encoder_{source_num}"])
        encoder[f"encoder_{source_num}"].link(rtppay[f"rtppay_{source_num}"])
        rtppay[f"rtppay_{source_num}"].link(sink[f"sink_{source_num}"])


    # create an event loop and feed GStreamer bus messages to it
    loop = GObject.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)


    # Start streaming
    rtsp_port_num = 8554

    server = GstRtspServer.RTSPServer.new()
    server.props.service = "%d" % rtsp_port_num
    server.attach(None)

    factory_dict = dict()
    for i in range(len(sources)):
        udpsink_port = udpsink_port_list[i]
        factory = GstRtspServer.RTSPMediaFactory.new()
        factory.set_launch(
            "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (
                udpsink_port, codec))
        factory.set_shared(True)
        factory_dict[f"factory_{i}"] = factory
    for source_num in range(len(sources)):
        # NOTE: every factory is mounted at the same path, so each one
        # overwrites the previous; distinct paths per source (e.g.
        # f"/ds-test-{source_num}") are likely what was intended.
        server.get_mount_points().add_factory("/ds-test", factory_dict[f"factory_{source_num}"])

    osdsinkpad = dict()
    for source_num in range(len(sources)):
        osdsinkpad[f"osdsinkpad_{source_num}"] = nvosd[f"nvosd_{source_num}"].get_static_pad("sink")
        if not osdsinkpad[f"osdsinkpad_{source_num}"]:
            sys.stderr.write(" Unable to get sink pad of nvosd \n")

    for osp, source in zip(osdsinkpad.values(), sources):
        osp.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, source)

    # start play back and listen to events
    print("Starting pipeline \n", pipeline)
    pipeline.set_state(Gst.State.PLAYING)

    loop.run()

Creating source:

def create_source_bin(index, uri, drop_frame_rate):
    # Create a source GstBin to abstract this bin's content from the rest of the pipeline
    bin_name = "source-bin-%02d" % index
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")

    uri_decode_bin.set_property("uri", uri)
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin, drop_frame_rate)

    Gst.Bin.add(nbin, uri_decode_bin)
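    # The ghost pad is created without a target here; cb_newpad retargets it
    # to the decoder's src pad once uridecodebin exposes decoded NVMM video.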
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

cb_newpad & decodebin_child_added:

def cb_newpad(decodebin, decoder_src_pad, data):
    print("In cb_newpad\n")
    caps = decoder_src_pad.get_current_caps()
    gststruct = caps.get_structure(0)
    gstname = gststruct.get_name()
    source_bin = data
    features = caps.get_features(0)

    if (gstname.find("video") != -1):
        if features.contains("memory:NVMM"):
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")


def decodebin_child_added(child_proxy, Object, name, user_data, drop_frame_rate):
    # Extra connect() arguments arrive in the order they were passed:
    # the source bin (user_data) first, then the drop-frame-interval value.
    print("Decodebin child added:", name)
    if name.find("decodebin") != -1:
        Object.connect("child-added", decodebin_child_added, user_data, drop_frame_rate)
    if name.find("nvv4l2decoder") != -1:
        # drop-frame-interval keeps one frame out of every drop_frame_rate
        # frames and drops the rest, reducing the effective FPS.
        print("Setting drop-frame-interval\n")
        Object.set_property("drop-frame-interval", drop_frame_rate)

The OSD probe function is similar to the code below:

def osd_sink_pad_buffer_probe(pad, info, u_data):
    frame_number = 0
    # Initializing the object counters to 0.
    obj_counter = {
        PGIE_CLASS_ID_VEHICLE: 0,
        PGIE_CLASS_ID_PERSON: 0,
        PGIE_CLASS_ID_BICYCLE: 0,
        PGIE_CLASS_ID_ROADSIGN: 0
    }
    num_rects = 0

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer")
        return Gst.PadProbeReturn.OK

    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        frame_number=frame_meta.frame_num
        num_rects = frame_meta.num_obj_meta
        l_obj=frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break

        # Acquiring a display meta object. The memory ownership remains in
        # the C code so downstream plugins can still access it. Otherwise
        # the garbage collector will claim it when this probe function exits.
        display_meta=pyds.nvds_acquire_display_meta_from_pool(batch_meta)
        display_meta.num_labels = 1
        py_nvosd_text_params = display_meta.text_params[0]
        # Setting display text to be shown on screen
        # Note that the pyds module allocates a buffer for the string, and the
        # memory will not be claimed by the garbage collector.
        # Reading the display_text field here will return the C address of the
        # allocated string. Use pyds.get_string() to get the string content.
        py_nvosd_text_params.display_text = "Frame Number={} Number of Objects={} Vehicle_count={} Person_count={}".format(frame_number, num_rects, obj_counter[PGIE_CLASS_ID_VEHICLE], obj_counter[PGIE_CLASS_ID_PERSON])

        # Now set the offsets where the string should appear
        py_nvosd_text_params.x_offset = 10
        py_nvosd_text_params.y_offset = 12

        # Font , font-color and font-size
        py_nvosd_text_params.font_params.font_name = "Serif"
        py_nvosd_text_params.font_params.font_size = 10
        # set(red, green, blue, alpha); set to White
        py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

        # Text background color
        py_nvosd_text_params.set_bg_clr = 1
        # set(red, green, blue, alpha); set to Black
        py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
        # Using pyds.get_string() to get display_text as string
        print(pyds.get_string(py_nvosd_text_params.display_text))
        pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)
        try:
            l_frame=l_frame.next
        except StopIteration:
            break
            
    return Gst.PadProbeReturn.OK	

We tested with 1 to 3 sources; the results are the same.
Below are the sources and the corresponding output from our app.
Source specifications:

  1. Stream #0:0: Video: h264 (High), yuv420p(tv, progressive), 960x576, 15 fps, 25 tbr, 90k tbn, 30 tbc

Below is the output for the first stream: frame number, frame time (epoch seconds), and the time difference from the previous frame.

| Frame number | Frame time (epoch s) | Time diff (s) |
| --- | --- | --- |
| 06 | 1659421246 | 0.0041 |
| 07 | 1659421246 | 0.0036 |
| 08 | 1659421246 | 0.0033 |
| 09 | 1659421246 | 0.0034 |
| 10 | 1659421247 | 1.0872 |
| 11 | 1659421247 | 0.0038 |
| 12 | 1659421247 | 0.0032 |
| 13 | 1659421247 | 0.0031 |
| 14 | 1659421247 | 0.0029 |
| 15 | 1659421248 | 0.9158 |
| 16 | 1659421248 | 0.0037 |
| 17 | 1659421248 | 0.0034 |
| 18 | 1659421248 | 0.0032 |
| 19 | 1659421248 | 0.0031 |
| 20 | 1659421249 | 0.9926 |
| 21 | 1659421249 | 0.0038 |
| 22 | 1659421249 | 0.0033 |
| 23 | 1659421249 | 0.0031 |
| 24 | 1659421249 | 0.003 |
| 25 | 1659421250 | 1.0456 |
| 26 | 1659421250 | 0.0037 |
| 27 | 1659421250 | 0.0033 |
| 28 | 1659421250 | 0.0033 |
| 29 | 1659421250 | 0.0031 |
| 30 | 1659421251 | 0.9532 |
  2. Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080, 15 fps, 25 tbr, 90k tbn, 30 tbc

Below is the output for the second stream:

| Frame number | Frame time (epoch s) | Time diff (s) |
| --- | --- | --- |
| 06 | 1659423201 | 0.161 |
| 07 | 1659423201 | 0.2312 |
| 08 | 1659423201 | 0.2091 |
| 09 | 1659423202 | 0.1907 |
| 10 | 1659423202 | 0.2018 |
| 11 | 1659423202 | 0.178 |
| 12 | 1659423202 | 0.2221 |
| 13 | 1659423202 | 0.2015 |
| 14 | 1659423203 | 0.1973 |
| 15 | 1659423203 | 0.1993 |
| 16 | 1659423203 | 0.1589 |
| 17 | 1659423203 | 0.2415 |
| 18 | 1659423203 | 0.2004 |
| 19 | 1659423204 | 0.2078 |
| 20 | 1659423204 | 0.1945 |
| 21 | 1659423204 | 0.1359 |
| 22 | 1659423204 | 0.2622 |
| 23 | 1659423204 | 0.2009 |
| 24 | 1659423205 | 0.1971 |
| 25 | 1659423205 | 0.2031 |
| 26 | 1659423205 | 0.1615 |
| 27 | 1659423205 | 0.2452 |
| 28 | 1659423205 | 0.1944 |
| 29 | 1659423206 | 0.1978 |
| 30 | 1659423206 | 0.1991 |

With the first stream there is a large time difference every 5 frames (the 15 fps source with drop-frame-interval = 3 yields 5 fps, so each burst appears to be one second's worth of frames arriving as a single batch); this pattern is not present with the second stream.

Hi @yuweiw ,

check this out → Osd_sink_pad_buffer_probe receiving frames every second instead of the time of frame - #4 by user151731

Hi @user151731,
1. Could you provide us the streams?
2. Could you try changing the RTSP sink to a fakesink (a sketch of the swap follows this list)?
3. Is the problem still there when you run the deepstream-imagedata-multistream demo without any changes?
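A minimal sketch of the fakesink swap, following the element names in the code above (sync=False is assumed so the sink never throttles the pipeline):

    # Hypothetical isolation test: link each nvosd straight into a fakesink so
    # the encoder / RTP / UDP branch is taken out of the picture.
    for source_num in range(len(sources)):
        fakesink = Gst.ElementFactory.make("fakesink", f"fakesink_{source_num}")
        fakesink.set_property("sync", False)  # do not wait on buffer timestamps
        pipeline.add(fakesink)
        nvosd[f"nvosd_{source_num}"].link(fakesink)

If the per-second bursts disappear with the fakesink in place, the latency is coming from the rendering/streaming branch rather than from batching.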

Hi @yuweiw,

The streams are from a customer location. If we can set up a session I can share them; otherwise we do not have the customer's permission to share the streams.

I'm closing this topic since there has been no update from you for a while; I'm assuming the issue was resolved.
If you still need support, please open a new topic. Thanks

OK, then you can try the other two methods I mentioned.
Also, what do you mean by "if we can set up a session"? Would sending them to me via direct message count (click my ID, then message me)?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.