Incorrect frame order when reading an RTSP stream

• Hardware Platform (GPU): NVIDIA GeForce GTX 1070
• DeepStream Version: 6.3
• NVIDIA GPU Driver Version: 530.30.02

Task:
I have several cameras that I connect to via RTSP. I need to run a detection and tracking model on them and display the result on screen. Everything runs in a container on a remote server that I connect to via SSH. Since no monitor is attached to the server, I use X11 and forward the display to my local machine (this part works).

What am I doing:
I'm using the deepstream-test3 example from deepstream_python_apps. I run it as follows:

python3 deepstream_test_3.py -i rtsp://blablabla

The code works and I get video on screen, but the frames in this video are mixed up: they jump in time. The camera stamps the time on each frame, and those timestamps show the frames are shuffled, for example: 09:12:30, 09:12:31, 09:12:32, 09:12:30, 09:12:33, 09:12:31, 09:12:34.

At startup, it creates the sink with
sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

I tried replacing nveglglessink with xvimagesink and get the error shown in the screenshot.

How do I solve this, and what am I missing? What is causing the time jumps?


I'm attaching a screenshot of the error I mentioned above.

Please make sure the software dependencies are correct: Quickstart Guide — DeepStream 6.3 Release documentation

I used the ready-made devel container, and the source code runs fine, but the frames are shuffled. How would wrong dependencies cause the wrong frame order?

The DeepStream SDK depends on many different modules and drivers; only the compatible dependencies are verified.

I checked the dependencies and didn't notice anything incorrect.

There was a suggestion that the problem is in uridecodebin, so I decided to use rtspsrc instead. To test this, I wrote a pipeline with gst-launch, as follows:

gst-launch-1.0 rtspsrc location=rtsp://blabla latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! ximagesink

This pipeline works correctly. After that I want to write the same pipeline in Python. For example, I write

pipeline = Gst.parse_launch("rtspsrc location=rtsp://blabla latency=100 ! queue ! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! ximagesink")

then set the pipeline state to PLAYING and run a GLib main loop.
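Roughly, the full script looks like this (a minimal sketch; the RTSP URL is a placeholder):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

# Same pipeline string as the working gst-launch command
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://blabla latency=100 ! queue ! rtph264depay ! "
    "h264parse ! avdec_h264 ! videoconvert ! videoscale ! "
    "video/x-raw,width=640,height=480 ! ximagesink"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)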

I get warnings:
udpsrc gstudpsrc.c:1445:gst_udpsrc_open: warning: Could not create a buffer of requested 524288 bytes (Operation not permitted) Need net.admin privilege?

And also:

udpsrc gstudpsrc.c:1455:gst_udpsrc_open: have udp buffer of 212992 bytes while 524288 were requested

But despite the warnings, I manage to connect to the RTSP server and display the video on screen without mixed frames (although there is some lag).
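(As far as I understand, this udpsrc warning just means the process is not allowed to enlarge the UDP receive buffer: without CAP_NET_ADMIN the kernel caps it at net.core.rmem_max, so raising that limit on the host, e.g. sysctl -w net.core.rmem_max=524288, or granting the container the NET_ADMIN capability should silence it. The stream still works with the smaller 212992-byte buffer, just with a higher risk of packet loss.)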

But if I rewrite the same code using ElementFactory as follows:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst

Gst.init(None)

source = Gst.ElementFactory.make("rtspsrc", "source")
source.set_property('location', 'rtsp://')
source.set_property('latency', 100)

queue = Gst.ElementFactory.make("queue", "queue")
depay = Gst.ElementFactory.make("rtph264depay", "depay")
parse = Gst.ElementFactory.make("h264parse", "parse")
dec = Gst.ElementFactory.make("avdec_h264", "dec")
conv = Gst.ElementFactory.make("videoconvert", "conv")
scale = Gst.ElementFactory.make("videoscale", "scale")

caps = Gst.Caps.from_string("video/x-raw,width=640,height=480")
filter = Gst.ElementFactory.make("capsfilter", "filter")
filter.set_property("caps", caps)

sink = Gst.ElementFactory.make("ximagesink", "sink")

pipeline = Gst.Pipeline.new("my-pipeline")

for elem in [source, queue, depay, parse, dec, conv, scale, filter, sink]:
    pipeline.add(elem)

source.link(queue)
queue.link(depay)
depay.link(parse)
parse.link(dec)
dec.link(conv)
conv.link(scale)
scale.link(filter)
filter.link(sink)

loop = GLib.MainLoop()
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except:
    pass

and I get even more warnings, and the video does not appear on the screen. What am I doing wrong?

0:00:03.610392829 1684941 0x1a171e0 WARN udpsrc gstudpsrc.c:1445:gst_udpsrc_open: warning: Could not create a buffer of requested 524288 bytes (Operation not permitted). Need net.admin privilege?
0:00:03.610453051 1684941 0x1a171e0 WARN udpsrc gstudpsrc.c:1455:gst_udpsrc_open: have udp buffer of 212992 bytes while 524288 were requested
0:00:05.591160253 1684941 0x7fbb1801e300 WARN basesrc gstbasesrc.c:3072:gst_base_src_loop: error: Internal data stream error.
0:00:05.591176931 1684941 0x7fbb1801e300 WARN basesrc gstbasesrc.c:3072:gst_base_src_loop: error: streaming stopped, reason not-linked (-1)
0:00:32.064079570 1684941 0x7fbb1801e2a0 WARN rtspsrc gstrtspsrc.c:3458:on_timeout_common: source 619308f6, stream 619308f6 in session 0 timed out

Seems there is something wrong with your RTSP source

Maybe, but then why can I connect via gst-launch without any problems?

I made some progress and was able to write a pipeline that starts and displays the video stream in the correct order (no mixed frames). The key turned out to be that rtspsrc creates its source pad dynamically, so it cannot be linked statically; gst-launch handles this delayed linking automatically, but with ElementFactory I have to connect a pad-added handler. However, when I try to add a pgie for detection, I get warnings and errors.

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

def on_pad_added(src, pad, elements):
    sink_pad = elements.get_static_pad("sink")
    pad.link(sink_pad)

Gst.init(None)
pipeline = Gst.Pipeline.new("my-pipeline")

source = Gst.ElementFactory.make("rtspsrc", "source")
source.set_property('location', 'rtsp://ED')
source.set_property('latency', 100)

queue1 = Gst.ElementFactory.make("queue", "queue1")
depay = Gst.ElementFactory.make("rtph264depay", "depay")
queue2 = Gst.ElementFactory.make("queue", "queue2")
parse = Gst.ElementFactory.make("h264parse", "parse")
queue3 = Gst.ElementFactory.make("queue", "queue3")
dec = Gst.ElementFactory.make("avdec_h264", "dec")
queue4 = Gst.ElementFactory.make("queue", "queue4")
conv = Gst.ElementFactory.make("nvvideoconvert", "conv")
queue5 = Gst.ElementFactory.make("queue", "queue5")
scale = Gst.ElementFactory.make("videoscale", "scale")
caps = Gst.Caps.from_string("video/x-raw,width=640,height=480")
filter = Gst.ElementFactory.make("capsfilter", "filter")
filter.set_property("caps", caps)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property('config-file-path', '/opt/nvidia/deepstream/deepstream-6.3/sources/DeepStream-Yolo/config_infer_primary_yoloV8.txt')
sink = Gst.ElementFactory.make("ximagesink", "sink")

for elem in [source, queue1, depay, queue2, parse, queue3, dec, queue4, conv, queue5, scale, filter, pgie, sink]:
    pipeline.add(elem)

source.connect("pad-added", on_pad_added, queue1)
queue1.link(depay)
depay.link(queue2)
queue2.link(parse)
parse.link(queue3)
queue3.link(dec)
dec.link(queue4)
queue4.link(conv)
conv.link(queue5)
queue5.link(pgie)
pgie.link(scale)
scale.link(filter)
filter.link(sink)

loop = GLib.MainLoop()
pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except:
    pass

I also want to note that when I remove the pgie from the pipeline, the warnings remain, but there is no error and the code runs.

I also wanted to ask whether there are ready-made pipelines that use rtspsrc instead of uridecodebin. And what causes the incorrect processing of the RTSP stream with uridecodebin in the first place?

logs.txt (115.0 KB)

Please use the hardware decoder: Gst-nvvideo4linux2 — DeepStream 6.2 Release documentation

There are Python samples with the correct pipeline: deepstream_python_apps/apps/deepstream-test1 at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)
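For reference, here is roughly what the suggested pipeline looks like with rtspsrc feeding the hardware decoder and the nvstreammux/nvdsosd stages that nvinfer expects in front of it (a minimal sketch, assuming DeepStream 6.3 element names; the URL, stream resolution, and config path are placeholders, not a verified sample):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

def on_pad_added(src, pad, depay):
    # rtspsrc pads appear at runtime; link only the video RTP pad,
    # since the server may also expose an audio pad
    caps = pad.get_current_caps()
    if caps and caps.get_structure(0).get_string("media") == "video":
        pad.link(depay.get_static_pad("sink"))

Gst.init(None)
pipeline = Gst.Pipeline.new("ds-pipeline")

source = Gst.ElementFactory.make("rtspsrc", "source")
source.set_property("location", "rtsp://...")  # placeholder URL
source.set_property("latency", 100)
depay = Gst.ElementFactory.make("rtph264depay", "depay")
parse = Gst.ElementFactory.make("h264parse", "parse")
dec = Gst.ElementFactory.make("nvv4l2decoder", "dec")  # hardware decoder, outputs NVMM buffers
mux = Gst.ElementFactory.make("nvstreammux", "mux")    # nvinfer needs batched NVMM input
mux.set_property("batch-size", 1)
mux.set_property("width", 1280)   # assumed stream resolution
mux.set_property("height", 720)
pgie = Gst.ElementFactory.make("nvinfer", "pgie")
pgie.set_property("config-file-path", "config_infer_primary.txt")  # placeholder config
conv = Gst.ElementFactory.make("nvvideoconvert", "conv")
osd = Gst.ElementFactory.make("nvdsosd", "osd")        # draws the detection boxes
sink = Gst.ElementFactory.make("nveglglessink", "sink")

for e in [source, depay, parse, dec, mux, pgie, conv, osd, sink]:
    pipeline.add(e)

source.connect("pad-added", on_pad_added, depay)
depay.link(parse)
parse.link(dec)
# nvstreammux uses request pads named sink_0, sink_1, ... (one per stream)
dec.get_static_pad("src").link(mux.get_request_pad("sink_0"))
mux.link(pgie)
pgie.link(conv)
conv.link(osd)
osd.link(sink)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
finally:
    pipeline.set_state(Gst.State.NULL)

Compared to the earlier attempt, the important differences are nvv4l2decoder instead of avdec_h264 and the nvstreammux request pad in front of nvinfer; this mirrors the structure of the deepstream-test1 sample linked above.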

I got the desired result, but I would like to ask one more question: does DeepStream support multicast?

The network protocol is not implemented inside DeepStream. The open source GStreamer rtspsrc supports multicast: rtspsrc (gstreamer.freedesktop.org)
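For example, multicast transport can be requested from rtspsrc through its protocols property (a sketch; whether multicast is actually used depends on what the RTSP server offers, and the URL is a placeholder):

# Ask the server for UDP multicast transport
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://... protocols=udp-mcast latency=100 ! "
    "rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! ximagesink"
)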