RTSP stream display delay with DeepStream 5.0 and the uridecodebin GStreamer element

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Nano Developer Kit
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.0

Problem description:
I am using an IP camera for object detection with the DeepStream Python bindings, so the input to the pipeline is an RTSP stream. I followed the example code in “deepstream-test3”, where the input is either a video file or an RTSP stream URI (the corresponding call is Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")).
The program runs, but the video shown on the screen has about a 3-second delay.
The sink I use is nvoverlaysink, since I want to show the results on the local screen. (By the way, I cannot find documentation explaining the nvoverlaysink or nveglglessink properties.)

Analysis:
I suspect the delay comes from “uridecodebin”. I set its “buffer-duration” property to 1 and “buffer-size” to 1, but the delay is still there.

Question:
Does anyone have an idea why the delay happens and how it can be resolved? Thank you so much!

Here is the related code:

def create_source_bin(index, uri):
    bin_name = "source-bin-%02d" % index
    print(bin_name)
    nbin = Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the URI. We use uridecodebin and let it figure
    # out the container format of the stream and the codec, and plug the appropriate
    # demux and decode plugins.
    uri_decode_bin = Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")

    # Set the input URI on the source element.
    uri_decode_bin.set_property("uri", uri)
    uri_decode_bin.set_property("buffer-duration", 1)
    uri_decode_bin.set_property("buffer-size", 1)
    # Connect to the "pad-added" signal of the decodebin, which generates a callback
    # once a new pad for raw data has been created by the decodebin.
    uri_decode_bin.connect("pad-added", cb_newpad, nbin)
    uri_decode_bin.connect("child-added", decodebin_child_added, nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy for
    # the video decoder src pad. The ghost pad has no target right now. Once the
    # decode bin creates the video decoder and generates the cb_newpad callback, we
    # will set the ghost pad target to the video decoder src pad.
    Gst.Bin.add(nbin, uri_decode_bin)
    bin_pad = nbin.add_pad(Gst.GhostPad.new_no_target("src", Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin
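
Note: the cb_newpad and decodebin_child_added callbacks referenced above are not included in the post. Below is a minimal sketch of what they typically look like in the deepstream-test3 style, plus one hedged idea for the delay: lowering the jitterbuffer latency of the rtspsrc child that uridecodebin creates internally (its default is 2000 ms). The 200 ms value and the child-name checks are assumptions, not part of the original code.

def decodebin_child_added(child_proxy, Object, name, user_data):
    # Called for every element that uridecodebin creates internally.
    print("Decodebin child added:", name)
    if name.find("decodebin") != -1:
        # Recurse so that children of nested decodebins are handled too.
        Object.connect("child-added", decodebin_child_added, user_data)
    if name.find("source") != -1:
        # For an RTSP URI the "source" child is rtspsrc. Its jitterbuffer latency
        # defaults to 2000 ms; lowering it is one common way to reduce display delay.
        # The 200 ms value here is only an illustrative assumption.
        if Object.find_property("latency") is not None:
            Object.set_property("latency", 200)

def cb_newpad(decodebin, decoder_src_pad, data):
    # Called when uridecodebin exposes a new pad carrying decoded data.
    caps = decoder_src_pad.get_current_caps()
    gstname = caps.get_structure(0).get_name()
    features = caps.get_features(0)
    source_bin = data
    if gstname.find("video") != -1:
        # Only link if the decoder output is in NVMM memory (hardware decoder).
        if features.contains("memory:NVMM"):
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write(" Failed to link decoder src pad to source bin ghost pad \n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick an NVIDIA decoder plugin \n")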

Here is the code in the main file:

streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")

if not streammux:
    sys.stderr.write(" Unable to create NvStreamMux \n")
step_idx += 1
pipeline.add(streammux)

for i in range(number_sources):
    if not path.exists(imagesave_folder+"/stream_"+str(i)):             # Create the screenshot directory for this stream (images which contain unsafe behavior).
        os.mkdir(imagesave_folder+"/stream_"+str(i))
    saved_count["stream_"+str(i)]=0                                     # Initialization of "saved_count".
    print("Creating source_bin ",i)
    uri_name=video_directory[i]
    if uri_name.find("rtsp://") == 0:                                   # If the URI starts with rtsp:// (find returns 0), the input is a live stream; otherwise (find returns -1) it is a regular video file.
        is_live = True
    source_bin=create_source_bin(i, uri_name)
    if not source_bin:
        sys.stderr.write("Unable to create source bin \n")
    print("Step%d: Creating source%d." %(step_idx,i) )
    step_idx += 1
    pipeline.add(source_bin)
    padname="sink_%u" %i
    sinkpad= streammux.get_request_pad(padname) 
    if not sinkpad:
        sys.stderr.write("Unable to create sink pad bin \n")
    srcpad=source_bin.get_static_pad("src")
    if not srcpad:
        sys.stderr.write("Unable to create src pad bin \n")
    srcpad.link(sinkpad)                                                # Link the source bin's src pad to the request sink pad of nvstreammux; this is how each decoded stream feeds the muxer.

print("Step%d: Setting up nvinfer to run inference on the decoders output." %step_idx)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")          # Object detection model.
if not pgie:
    sys.stderr.write(" Unable to create pgie \n")
step_idx += 1

print("Step%d: Setting up tracker for object tracking." %step_idx)
tracker = Gst.ElementFactory.make("nvtracker", "tracker")               # Object tracking model.
if not tracker:
    sys.stderr.write(" Unable to create tracker \n")
step_idx += 1
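
The rest of the pipeline (converter, on-screen display, sink) is not shown in the post. Below is a minimal sketch of how it could continue, following the deepstream-test3 pattern, with sync disabled on the sink as discussed later in this thread; the element names, linking order, and the live-source setting are assumptions about the surrounding code, not the original.

print("Step%d: Creating nvvideoconvert, nvdsosd and the display sink." % step_idx)
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
sink = Gst.ElementFactory.make("nvoverlaysink", "nvvideo-renderer")
if not sink:
    sys.stderr.write(" Unable to create sink \n")
# Do not synchronize on buffer timestamps; for a live RTSP source this keeps the
# sink from holding frames back and adding extra display delay.
sink.set_property("sync", False)
if is_live:
    # Tell the muxer the input is a live source so it batches on arrival time.
    streammux.set_property("live-source", 1)

pipeline.add(pgie)
pipeline.add(tracker)
pipeline.add(nvvidconv)
pipeline.add(nvosd)
pipeline.add(sink)
streammux.link(pgie)
pgie.link(tracker)
tracker.link(nvvidconv)
nvvidconv.link(nvosd)
nvosd.link(sink)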

Hi,
For running deepstream-test3 on Jetson Nano, you would need some customization. Please refer to

The same modification should work in the Python code. Please give it a try.

Thanks a lot for your help. Your comments solved the problem. One small question after switching to “nvoverlaysink”: when I run my program, the processed video is shown on my screen successfully, without delay. However, “nvoverlaysink” gives me a full-screen display, and I cannot switch to my terminal while the code is running. Do you have any idea how to adjust the size of the video display? I guess it is something like g_object_set(G_OBJECT(sink), “sync”, FALSE, NULL);, but I cannot find the property that controls the screen size.

Hi,
Please configure the following properties of nvoverlaysink:

  overlay-x           : Overlay X coordinate
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
  overlay-y           : Overlay Y coordinate
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
  overlay-w           : Overlay Width
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
  overlay-h           : Overlay Height
                        flags: readable, writable
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0
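
In the Python bindings this could look like the following sketch; the 960x540 geometry is just an illustrative assumption:

sink = Gst.ElementFactory.make("nvoverlaysink", "nvvideo-renderer")
# Keep the overlay in the top-left corner and limit it to 960x540 so the
# terminal stays reachable (the values are only an example).
sink.set_property("overlay-x", 0)
sink.set_property("overlay-y", 0)
sink.set_property("overlay-w", 960)
sink.set_property("overlay-h", 540)
sink.set_property("sync", False)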

Thanks a lot. I can change the size of the local display now. But when the video opens, I cannot find any buttons (maximize, minimize, close) in the upper left corner, and I also cannot move the window. That’s quite strange. Can you help with this small problem? Thanks a lot for your help.

Hi,
The nvoverlaysink plugin is not a window-based sink. If you need window operations, please use nveglglessink with sync=false.
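
In Python that would be something like this sketch (only the element name and the sync property come from the reply above; everything else is illustrative):

sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
if not sink:
    sys.stderr.write(" Unable to create nveglglessink \n")
# Render frames as soon as they arrive instead of waiting on the pipeline clock.
sink.set_property("sync", False)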

I see. I tried nveglglessink with sync=false, but it still gives me a 3-second delay. I will continue using nvoverlaysink with a smaller screen size. Thanks a lot for your help. You helped me quite a lot. Thank you.