High latency when using udpsrc

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Nano
• DeepStream Version: 6.0.0
• JetPack Version: 4.6.1
• TensorRT Version: 8.2.1.8
• Issue Type: questions

Hi folks,
I have a specific problem, but I want to phrase my question more generally.
I run two scripts (YOLO + tracker DeepStream pipelines using the Python bindings). They have exactly the same pipeline, except that in the first one the source element is a filesrc that loads a .h264 file, while in the second one it is a udpsrc, and I replay a pcap (H.264 video) from my computer.
Unfortunately, when I run the code with udpsrc I get very high latency, a visible delay on screen, and some video artifacts.
My goal is to receive an H.264 UDP stream and run YOLOv5 + tracker on it, so I would be happy for any advice on how to improve performance. I am new to DeepStream and I hope you can help me improve my code and results; I don't really understand why I get different behavior between filesrc and udpsrc.

import sys
sys.path.append('../')  # so the deepstream_python_apps common/ helpers are importable
import configparser

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst

# Helpers from the deepstream_python_apps "common" modules; fps_streams,
# past_tracking_meta and osd_sink_pad_buffer_probe follow deepstream-test2
# (the probe itself is defined elsewhere in the script).
from common.is_aarch64 import is_aarch64
from common.bus_call import bus_call
from common.FPS import GETFPS

fps_streams = {}
past_tracking_meta = [0]

# Placeholders: replace with your actual UDP port and multicast group
PORT = 5000
MULTI_GROUP = "224.1.1.1"


def main(args):
    # Check input arguments (hardcoded here for testing)
    args = ['python3 ArielTracker.py', '/opt/nvidia/deepstream/deepstream/samples/streams/gvulot_2.h264']
    if (len(args) < 2):
        sys.stderr.write("usage: %s <h264_elementary_stream> [0/1]\n" % args[0])
        sys.exit(1)

    for i in range(0, len(args) - 1):
        fps_streams["stream{0}".format(i)] = GETFPS(i)
    number_sources = len(args) - 1

    # Standard GStreamer initialization
    if (len(args) == 3):
        past_tracking_meta[0] = int(args[2])
    Gst.init(None)

    # Create gstreamer elements
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")

    # Source element for receiving the UDP stream
    print("Creating Source \n ")
    # source = Gst.ElementFactory.make("filesrc", "file-source")
    source = Gst.ElementFactory.make("udpsrc", "UDP-source")
    if not source:
        sys.stderr.write(" Unable to create Source \n")
    source.set_property("port", PORT)
    source.set_property("multicast-group", MULTI_GROUP)

    # Since the incoming data is an elementary H.264 stream,
    # we need an h264parse element
    print("Creating H264Parser \n")
    h264parser = Gst.ElementFactory.make("h264parse", "h264-parser")
    # h264parser = Gst.ElementFactory.make("mpeg4videoparse", "mpeg4-parser")
    if not h264parser:
        sys.stderr.write(" Unable to create h264 parser \n")

    # Use nvv4l2decoder for hardware-accelerated decode on the GPU
    print("Creating Decoder \n")
    decoder = Gst.ElementFactory.make("nvv4l2decoder", "nvv4l2-decoder")
    if not decoder:
        sys.stderr.write(" Unable to create Nvv4l2 Decoder \n")
    decoder.set_property('enable-max-performance', 1)

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    # Use nvinfer to run inferencing on decoder's output,
    # behaviour of inferencing is set through config file
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    if not pgie:
        sys.stderr.write(" Unable to create pgie \n")

    tracker = Gst.ElementFactory.make("nvtracker", "tracker")
    if not tracker:
        sys.stderr.write(" Unable to create tracker \n")

    sgie1 = Gst.ElementFactory.make("nvinfer", "secondary1-nvinference-engine")
    if not sgie1:
        sys.stderr.write(" Unable to make sgie1 \n")

    sgie2 = Gst.ElementFactory.make("nvinfer", "secondary2-nvinference-engine")
    if not sgie2:
        sys.stderr.write(" Unable to make sgie2 \n")

    sgie3 = Gst.ElementFactory.make("nvinfer", "secondary3-nvinference-engine")
    if not sgie3:
        sys.stderr.write(" Unable to make sgie3 \n")

    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")

    # Create OSD to draw on the converted RGBA buffer
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")

    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")

    # Finally render the osd output
    if is_aarch64():
        # transform = Gst.ElementFactory.make("nvegltransform", "nvegl-transform")
        transform = Gst.ElementFactory.make("queue", "queue")

    print("Creating EGLSink \n")
    # sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
    sink = Gst.ElementFactory.make("nvoverlaysink", "nvvideo-renderer")
    sink.set_property('sync', 0)
    if not sink:
        sys.stderr.write(" Unable to create egl sink \n")

    print("Playing file %s " % args[1])
    # source.set_property('location', args[1])
    streammux.set_property('width', 640)
    streammux.set_property('height', 640)
    streammux.set_property('batch-size', 1)
    streammux.set_property('batched-push-timeout', 4000000)
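
    # Note (suggestion, not in the original post): for a live source such as
    # udpsrc, setting live-source=1 on nvstreammux is commonly recommended so
    # the muxer timestamps buffers by arrival time:
    # streammux.set_property('live-source', 1)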

    # Set properties of pgie
    pgie.set_property('config-file-path', "config_infer_primary_yoloV5.txt")
    # pgie.set_property('config-file-path', "config_infer_primary_yoloV7.txt")

    # Set properties of tracker
    config = configparser.ConfigParser()
    config.read('tracker_config.txt')
    config.sections()

    for key in config['tracker']:
        if key == 'tracker-width':
            tracker_width = config.getint('tracker', key)
            tracker.set_property('tracker-width', tracker_width)
        if key == 'tracker-height':
            tracker_height = config.getint('tracker', key)
            tracker.set_property('tracker-height', tracker_height)
        if key == 'gpu-id':
            tracker_gpu_id = config.getint('tracker', key)
            tracker.set_property('gpu_id', tracker_gpu_id)
        if key == 'll-lib-file':
            tracker_ll_lib_file = config.get('tracker', key)
            tracker.set_property('ll-lib-file', tracker_ll_lib_file)
        if key == 'll-config-file':
            tracker_ll_config_file = config.get('tracker', key)
            tracker.set_property('ll-config-file', tracker_ll_config_file)
        if key == 'enable-batch-process':
            tracker_enable_batch_process = config.getint('tracker', key)
            tracker.set_property('enable_batch_process', tracker_enable_batch_process)
        if key == 'enable-past-frame':
            tracker_enable_past_frame = config.getint('tracker', key)
            tracker.set_property('enable_past_frame', tracker_enable_past_frame)
        if key == 'compute-hw':
            tracker_enable_compute_hw = config.getint('tracker', key)
            tracker.set_property('compute-hw', tracker_enable_compute_hw)


    print("Adding elements to Pipeline \n")
    pipeline.add(source)
    pipeline.add(h264parser)
    pipeline.add(decoder)
    pipeline.add(streammux)
    pipeline.add(pgie)
    pipeline.add(tracker)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)
    if is_aarch64():
        pipeline.add(transform)

    # We link the elements together:
    # udp-source -> h264-parser -> nvv4l2-decoder -> nvstreammux ->
    # nvinfer -> nvtracker -> nvvidconv -> nvosd -> video-renderer
    print("Linking elements in the Pipeline \n")
    source.link(h264parser)
    h264parser.link(decoder)

    sinkpad = streammux.get_request_pad("sink_0")
    if not sinkpad:
        sys.stderr.write(" Unable to get the sink pad of streammux \n")
    srcpad = decoder.get_static_pad("src")
    if not srcpad:
        sys.stderr.write(" Unable to get source pad of decoder \n")
    srcpad.link(sinkpad)
    streammux.link(pgie)
    pgie.link(tracker)
    tracker.link(nvvidconv)
    nvvidconv.link(nvosd)
    if is_aarch64():
        nvosd.link(transform)
        transform.link(sink)
    else:
        nvosd.link(sink)

    # Create an event loop and feed GStreamer bus messages to it
    loop = GLib.MainLoop()

    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect("message", bus_call, loop)

    # Let's add a probe to get informed of the generated metadata. We add the
    # probe to the sink pad of the osd element, since by that point the buffer
    # will have all the metadata.
    osdsinkpad = nvosd.get_static_pad("sink")
    if not osdsinkpad:
        sys.stderr.write(" Unable to get sink pad of nvosd \n")
    osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)

    print("Starting pipeline \n")

    # Start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass

    # cleanup
    pipeline.set_state(Gst.State.NULL)


if __name__ == '__main__':
    sys.exit(main(sys.argv))

config_infer_primary_yoloV5.txt (729 Bytes)

From your description, the only difference is the video source; if the raw data FPS is the same, the output should be the same.
Do they have the same H.264 encoding parameters? You can add a probe function on nvv4l2decoder to check whether the FPS is the same.
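
For example, a minimal probe sketch along these lines (my illustration; it assumes the decoder object from your script above) prints the measured output FPS once per second:

```
import time
from gi.repository import Gst

_probe_state = {"count": 0, "start": time.time()}

def decoder_fps_probe(pad, info, user_data):
    # Count buffers leaving the decoder and report FPS roughly once per second
    _probe_state["count"] += 1
    elapsed = time.time() - _probe_state["start"]
    if elapsed >= 1.0:
        print("decoder output fps: %.1f" % (_probe_state["count"] / elapsed))
        _probe_state["count"] = 0
        _probe_state["start"] = time.time()
    return Gst.PadProbeReturn.OK

# Attach it to the decoder created in the script above:
decoder.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, decoder_fps_probe, 0)
```

Running both pipelines with this probe attached lets you compare the decoder output rate of the file and UDP sources directly.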

Hi, thanks for the reply. I found a solution, but unfortunately I don't know whether it is the best option: I set the interval property (Gst-nvinfer) to 2 and I get the same result (it looks the same to me).
I want to ask whether the interval property drops frames, or whether the skipped frames just pass through so the same frame rate is displayed on screen.
In general, I would be glad to understand how I can optimize my pipeline (I will be happy for any advice).
I will check the H.264 parameters, but when I run the pipeline:

gst-launch-1.0 udpsrc port=XXXXX multicast-group=XXXX ! h264parse ! nvv4l2decoder max-performance=1 ! autovideosink

I get a very good result, so I am not sure the problem is in the decoding part.
Thank you for your help

1. nvinfer's interval property means "Specifies the number of consecutive batches to be skipped for inference"; please find it in Gst-nvinfer — DeepStream 6.1.1 Release documentation. If it is set to 2, nvinfer will do one inference every 3 batches.
2. Does the DeepStream native sample deepstream-test2 have the latency issue?
3. If "gst-launch-1.0 udpsrc port=XXXXX multicast-group=XXXX ! h264parse ! nvv4l2decoder max-performance=1 ! autovideosink" is good, you can then add nvstreammux and nvinfer step by step.
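
For example, a sketch of step 3 using Gst.parse_launch (the port, multicast group, and config path are placeholders, and live-source=1 is a suggestion for live sources, not from your original pipeline):

```
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Prototype the extended pipeline before porting it into the full script:
# udpsrc -> h264parse -> nvv4l2decoder -> nvstreammux -> nvinfer -> render
pipeline = Gst.parse_launch(
    "udpsrc port=5000 multicast-group=224.1.1.1 ! h264parse ! "
    "nvv4l2decoder enable-max-performance=1 ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=640 height=640 live-source=1 ! "
    "nvinfer config-file-path=config_infer_primary_yoloV5.txt ! "
    "nvvideoconvert ! nvdsosd ! nvoverlaysink sync=false"
)

loop = GLib.MainLoop()
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
```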

I added nvstreammux and then nvinfer; I see the same latency, and I get this warning message:

WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstNvOverlaySink-nvoverlaysink:autovideosink0-actual-sink-nvoverlay: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstNvOverlaySink-nvoverlaysink:autovideosink0-actual-sink-nvoverlay:
There may be a timestamping problem, or this computer is too slow.
(the same warning repeats many more times)

When I set interval=2 in the config file, I don't have latency and everything runs OK.
I want to ask another question: if I have only one video source, why do I need the nvstreammux element? Doesn't it add delay to my pipeline? Maybe I can remove it?

I have attached the graph of the pipeline:


nvinfer needs a batched hardware buffer, which is generated by nvstreammux; please refer to Gst-nvinfer — DeepStream 6.3 Release documentation.

It looks like, for some reason, the delay occurs when I try to render the result on the screen:

WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstNvOverlaySink-nvoverlaysink:autovideosink0-actual-sink-nvoverlay: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0/GstNvOverlaySink-nvoverlaysink:autovideosink0-actual-sink-nvoverlay:
There may be a timestamping problem, or this computer is too slow.

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Can you use fakesink, filesink, or RTSP streaming instead of nveglglessink?
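
For example, a minimal sketch (assuming your script above) that swaps the renderer for a fakesink, to check whether the latency comes from rendering:

```
# Replace the nvoverlaysink with a fakesink that discards buffers;
# sync=0 disables clock synchronization at the sink
sink = Gst.ElementFactory.make("fakesink", "fake-sink")
sink.set_property('sync', 0)
```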

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.