I built the pipeline, and the delay when loading the RTSP stream is very noticeable. Why is this?

  • deepstream-app version 6.1.0
  • DeepStreamSDK 6.1.0
  • CUDA Driver Version: 11.4
  • CUDA Runtime Version: 11.0
  • TensorRT Version: 8.2
  • cuDNN Version: 8.4
  • libNVWarp360 Version: 2.0.1d3
  • Device: A6000

1. Here is the pipeline I built. Is the pipeline too long and in need of optimization? If so, how should I optimize it?
2. Or is there some other reason for the delay?

3. At the sink pad of nvosd in the pipeline, I added a probe callback that generates the eventMsgMeta payload.

    osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
    if (!osd_sink_pad)
        g_print ("Unable to get sink pad\n");
    else {
        if (msg2p_meta == 0)        //generate payload using eventMsgMeta
            gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                               osd_sink_pad_buffer_probe, NULL, NULL);
    }
    gst_object_unref (osd_sink_pad);

What do you mean by "the delay in loading RTSP is very noticeable"? You can try using uridecodebin as your source.

Yes, uridecodebin is used in source_bin.
An RTSP webcam is used as the input stream; it passes through the whole pipeline, and the final inferred video output has high latency.
Is my pipeline too long? How can I optimize it?
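One thing worth ruling out first: rtspsrc keeps a jitter buffer whose `latency` property defaults to 2000 ms, which by itself shows up as a visible delay. A quick way to check is to play the camera directly with a reduced latency and compare; the URL, depayloader, and decoder below are placeholders for an H.264 stream:

```shell
# Hypothetical camera URL and H.264 decode chain; adjust to your stream.
# rtspsrc's "latency" property defaults to 2000 ms; lowering it shrinks
# the jitter buffer at the cost of robustness to network jitter.
gst-launch-1.0 rtspsrc location="rtsp://CAMERA_URI" latency=200 ! \
    rtph264depay ! h264parse ! avdec_h264 ! autovideosink
```

When the source is wrapped in uridecodebin, the same property can be set from a handler connected to its "source-setup" signal, if your GStreamer version provides it.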

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

There may be several reasons, which you can check yourself:
1. Is it related to your network?
2. Did you add time-consuming work in any of the probe functions?
3. You can try removing the tee plugin from your pipeline to verify which branch affects the delay, msgconv or video-render.
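To narrow down which element contributes the delay, GStreamer's built-in latency tracer can be enabled through environment variables before launching the application; the application name and config file below are placeholders:

```shell
# Enable the GStreamer latency tracer and route its output to the log.
# Plain GST_TRACERS=latency reports end-to-end pipeline latency;
# GST_TRACERS="latency(flags=element)" (GStreamer >= 1.18) additionally
# reports per-element latency, which helps pinpoint the slow stage.
GST_DEBUG="GST_TRACER:7" GST_TRACERS="latency" ./deepstream-app -c config.txt
```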