Deepstream 5.1 rtsp and nvinfer issues

Please provide complete information as applicable to your setup.

• Jetson / GPU
• DeepStream Version 5.1
• JetPack Version 4.5.1
• TensorRT Version 7.1.3.0
• Issue Type: questions

I’m running a simple pipeline that reads 3 streams (30 FPS each) from an RTSP camera and runs nvinfer on them (YOLO object detection).

The pipeline is the following:

gst-launch-1.0 \
nvstreammux name=mux batch-size=3 width=1280 height=720 live-source=1 \
nvstreamdemux name=demux  \
rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! decodebin ! queue name=postdecode_queue_0 max-size-buffers=20 leaky=0 flush-on-eos=true ! nvvideoconvert ! mux.sink_0 \
rtspsrc location=rtsp://192.168.100.109:554/cam0_1 ! decodebin ! queue name=postdecode_queue_1 max-size-buffers=20 leaky=0 flush-on-eos=true ! nvvideoconvert ! mux.sink_1 \
rtspsrc location=rtsp://192.168.100.109:554/cam0_2 ! decodebin ! queue name=postdecode_queue_2 max-size-buffers=20 leaky=0 flush-on-eos=true ! nvvideoconvert ! mux.sink_2 \
mux.src ! nvvideoconvert ! nvinfer config-file-path=../yolo/yolo.txt name=nvinfer ! demux.sink \
demux.src_0 ! queue ! nvvideoconvert !  fakesink sync=1 \
demux.src_1 ! queue ! nvvideoconvert ! fakesink sync=1 \
demux.src_2 ! queue ! nvvideoconvert ! fakesink sync=1

What I see is that somewhere some frames are dropped, most probably because nvinfer can’t handle 90 FPS (3 streams × 30 FPS). The postdecode_queue_{X} queues never grow.

If I change the decodebin to rtph264depay ! h264parse ! avdec_h264 ! nvvideoconvert ! video/x-raw(memory:NVMM), the postdecode_queue_{X} queues start to grow (and I expect this behavior). The problem is that the hardware-accelerated H.264 decoder is then not used. If I use nvv4l2decoder instead of avdec_h264, the pipeline does not launch. I simplified the pipeline to gst-launch-1.0 rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! rtph264depay ! h264parse ! nvv4l2decoder ! fakesink and get:

(gst-launch-1.0:25118): GStreamer-CRITICAL **: 07:43:12.443: gst_mini_object_unref: assertion 'mini_object != NULL' failed

In the decodebin case I also added a bus message listener and expected QOS messages to appear, but there are no messages of that type (and it seems there are no messages related to frame dropping at all).
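A minimal sketch of such a bus listener, assuming a Python/PyGObject app (the single-source pipeline string is a stand-in, and qos=true on the sink is an assumption; sinks typically only generate QoS information when that property is enabled):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Single-source stand-in for the full pipeline above.
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! decodebin ! "
    "nvvideoconvert ! fakesink sync=true qos=true"
)

def on_message(bus, msg):
    if msg.type == Gst.MessageType.QOS:
        live, running_time, stream_time, timestamp, duration = msg.parse_qos()
        print(f"QOS from {msg.src.get_name()} at running time {running_time}")
    elif msg.type == Gst.MessageType.WARNING:
        err, dbg = msg.parse_warning()
        print(f"WARNING from {msg.src.get_name()}: {err.message}")

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()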

So, I have 2 questions:

  • how to disable frame dropping when decodebin is used
  • how to correctly run the pipeline with nvv4l2decoder instead of avdec_h264

In theory, there will be no frame drops; it will only slow down the frame processing speed. How do you know there’s a frame dropping issue?

Could you try adding a caps filter between h264parse and nvv4l2decoder, like h264parse ! 'video/x-h264,stream-format=byte-stream' ! nvv4l2decoder?


It does not work for H.264:

gst-launch-1.0 rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! rtph264depay ! h264parse ! video/x-h264,stream-format=byte-stream ! nvv4l2decoder ! fakesink sync=1

Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://192.168.100.109:554/cam0_0
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (request) SETUP stream 1
Progress: (open) Opened Stream
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

(gst-launch-1.0:10935): GStreamer-CRITICAL **: 04:05:28.071: gst_mini_object_unref: assertion 'mini_object != NULL' failed
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261

But I have switched the camera to H.265 mode and it works out of the box:
rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! rtph265depay ! h265parse ! nvv4l2decoder ! fakesink sync=1

What do you mean by slowing down? If a stream produces 30 FPS and the performance after nvinfer is 20 FPS, where is the rest? Can GStreamer slow down the camera stream (i.e. make the camera produce 20 FPS instead of 30)?

I set up 3 streams and added FPS counters before and after the nvstreammux, using src buffer callbacks or the gst-perf element. I expect 30 FPS on each input stream (90 total, before inference) and some slowdown after the inference (in my case, about 70 total). But what I see is that the FPS is the same at both postdecode_queue_0 and the queue after demux, about 70 total. That’s why I think there is some kind of drop somewhere, but I can’t understand where exactly.
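For reference, a minimal sketch of such a buffer-callback FPS counter, assuming a Python/PyGObject app (the helper name and the one-second window are illustrative):

import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def attach_fps_probe(element, label):
    state = {"count": 0, "t0": time.monotonic()}

    def on_buffer(pad, info):
        state["count"] += 1
        now = time.monotonic()
        if now - state["t0"] >= 1.0:
            print(f"{label}: {state['count'] / (now - state['t0']):.1f} fps")
            state["count"], state["t0"] = 0, now
        return Gst.PadProbeReturn.OK

    # Count every buffer leaving the element's src pad.
    pad = element.get_static_pad("src")
    pad.add_probe(Gst.PadProbeType.BUFFER, on_buffer)

# Usage, assuming the queue names from the pipeline above:
# attach_fps_probe(pipeline.get_by_name("postdecode_queue_0"), "preinfer_0")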

Also, if I add a leaky queue before the nvstreammux, I get a total of 90 FPS before it (as produced by the camera):

gst-launch-1.0 \
    nvstreammux name=mux batch-size=3 width=2048 height=1536 live-source=1 \
    nvstreamdemux name=demux  \
    rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! rtph265depay ! h265parse ! perf name=preinfer_0 ! queue leaky=1 flush-on-eos=true ! nvv4l2decoder ! nvvideoconvert ! mux.sink_0 \
    rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! rtph265depay ! h265parse ! perf name=preinfer_1 ! queue leaky=1 flush-on-eos=true ! nvv4l2decoder ! nvvideoconvert ! mux.sink_1 \
    rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! rtph265depay ! h265parse ! perf name=preinfer_2 ! queue leaky=1 flush-on-eos=true ! nvv4l2decoder ! nvvideoconvert ! mux.sink_2 \
    mux.src ! nvvideoconvert ! nvinfer config-file-path=../yolo/yolo.txt name=nvinfer ! demux.sink \
    demux.src_0 ! queue ! perf name=postinfer_0 ! fakesink sync=1 \
    demux.src_1 ! queue ! perf name=postinfer_1 ! fakesink sync=1 \
    demux.src_2 ! queue ! perf name=postinfer_2 ! fakesink sync=1

The output will be like

perf: preinfer_0;  ... fps: 29.981 ...
perf: postinfer_0; ... fps: 14.390 ...

But here the leaky queue is the one that explicitly drops the frames.

OK. Let’s narrow down the problem.

  1. Could you just use 1 source to test the fps?
  2. You can try removing the nvinfer plugin and test the fps.

It is possible that the buffer in the rtspsrc is full and it drops packets directly. As for the NVIDIA plugins, if you do not configure the related parameters, they will not actively drop frames.
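If the drops do happen inside rtspsrc, its buffering behaviour can be made explicit instead of relying on defaults. A sketch with illustrative values (protocols=tcp is another rtspsrc option that avoids UDP packet loss):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

src = Gst.ElementFactory.make("rtspsrc", "src0")
src.set_property("location", "rtsp://192.168.100.109:554/cam0_0")
src.set_property("latency", 2000)           # jitterbuffer size in milliseconds
src.set_property("drop-on-latency", False)  # keep late packets instead of dropping them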

Thank you so much for the help!

I get 30 FPS on one source, 2×30 on 2 sources, and 3× roughly 20-25 on 3 sources WITH nvinfer.

I get 30 FPS on one source, 2×30 on 2 sources, and 3×30 on 3 sources WITHOUT nvinfer.

In this case you can only improve performance by setting the interval parameter for nvinfer. This parameter specifies the number of consecutive batches to be skipped for inference.
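For reference, this can be set in the nvinfer configuration file referenced by the pipeline (a sketch; the value is illustrative):

[property]
# ... other model settings ...
# interval: number of consecutive batches to skip between inference calls
interval=1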

That’s what I’m actually doing. I dynamically increase the interval when any of the postdecode queues starts to grow constantly, and reset the interval when the queues are almost empty. This way I can get an average of 30 FPS (without losing frames, but skipping the inference sometimes).
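A minimal sketch of that adaptive logic, assuming a Python/PyGObject app; the element names match the pipelines above, but the thresholds and the interval cap are illustrative:

from gi.repository import GLib

def watch_queues(pipeline):
    queues = [pipeline.get_by_name(f"postdecode_queue_{i}") for i in range(3)]
    nvinfer = pipeline.get_by_name("nvinfer")

    def check():
        level = max(q.get_property("current-level-buffers") for q in queues)
        interval = nvinfer.get_property("interval")
        if level > 10 and interval < 5:
            nvinfer.set_property("interval", interval + 1)   # skip more batches
        elif level < 2 and interval > 0:
            nvinfer.set_property("interval", interval - 1)   # infer more often again
        return True  # keep the periodic check running

    GLib.timeout_add_seconds(1, check)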

But I still have two questions:

  • How can I programmatically determine that rtspsrc (or whatever) is dropping frames, e.g. connect to some signal or listen for some message on the bus? Unfortunately, I have found no way. I tried listening to the bus messages and looked at the rtspsrc documentation. I think it’s something more GStreamer-related (see the sketch after this list).
  • How can I run something like rtspsrc ! rtph264depay ! h264parse ! nvv4l2decoder on DeepStream 5.1 / JetPack 4.5.1 on the Nano (without decodebin)? It leads to a crash now, while decodebin works fine.
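On the first point, one possible approach (a sketch assuming a Python/PyGObject app, not a confirmed recipe): rtspsrc itself does not post drop messages, but the rtpjitterbuffer it creates internally exposes a read-only stats structure with counters such as num-lost and num-late, which can be polled:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

def watch_jitterbuffer_stats(pipeline):
    def on_deep_element_added(bin_, sub_bin, element):
        factory = element.get_factory()
        if factory and factory.get_name() == "rtpjitterbuffer":
            def report():
                stats = element.get_property("stats")  # Gst.Structure with num-lost, num-late, ...
                print(f"{element.get_name()}: {stats.to_string()}")
                return True  # keep polling
            GLib.timeout_add_seconds(5, report)

    # Connect before setting the pipeline to PLAYING so no element is missed.
    pipeline.connect("deep-element-added", on_deep_element_added)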

You can try enabling more logging for the RTSP modules to check whether there are any log entries about dropped frames:

GST_DEBUG=3,rtpjitterbuffer:6,rtspsrc:6 gst-launch-1.0 ...

You can get the pipeline graph for decodebin first by referring to the FAQ. Then build your own pipeline based on this graph.
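A sketch of that graph-dump step, assuming a Python/PyGObject app (paths and the wait time are illustrative); gst-launch-1.0 also writes these .dot files on its own when GST_DEBUG_DUMP_DOT_DIR is set:

import os
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Set the dump directory before initializing GStreamer.
os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"
Gst.init(None)

pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! decodebin ! "
    "nvvideoconvert ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
time.sleep(10)  # give decodebin time to autoplug its internal elements

# Writes /tmp/decodebin-graph.dot showing which elements decodebin chose.
Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, "decodebin-graph")
# Render with: dot -Tpng /tmp/decodebin-graph.dot -o decodebin-graph.png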

Unfortunately, I have found nothing related to frame drops. I even looked through the code and found some log entries that should be shown in case of a frame drop, but they do not appear. Maybe the DeepStream plugin version is different from the code version I looked at. Anyway, I will ask on the GStreamer forum if I’m still interested in finding this out. Maybe I have to dig here.

Have you tried enabling more log output according to my previous comment? Are there any logs related to the following: popped, dropping, Queue full, dropping old packet, etc.?

Yes. You can also check with them to see if rtspsrc drops frames when frame processing is slow downstream.

I was looking for exactly these strings in the output file:

 GST_DEBUG_FILE=./gstdebug.txt GST_DEBUG=0,rtpjitterbuffer:7,rtspsrc:7 gst-launch-1.0 ...
...
cat ./gstdebug.txt | grep -e "popped\|dropping" # <--- is empty

(I also looked through the file in a text editor =)

OK. If none of the plugins in the pipeline are dropping frames, you should see increasing latency in the output.

Enabling drop-on-latency=1 for rtspsrc makes the element output messages like Queue full, dropping old packet to the log. But the behaviour is still the same (or I think it’s the same): with or without drop-on-latency=1, some frames seem to disappear implicitly.

So, IMO, the solution to control the frame rate is to put a queue before the streammux and monitor its size, and if it grows, change nvinfer’s interval property.

I failed to find out where exactly the frames are lost/dropped (maybe I’m mistaken, but I believe I’m not =). To find out, I think it’s necessary to spend more time reading the GStreamer elements’ source code (and probably debugging).

I’m going to investigate this issue later and open another thread if necessary.

Thanks for helping @yuweiw

Due to the performance of your board, the FPS cannot be increased after the nvinfer plugin. If you just want to stabilize the frame rate, I suggest you consider using the videorate plugin. But this does not improve performance; it just stabilizes the frame rate.
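A single-source sketch of that suggestion (the placement of videorate after nvvideoconvert and the max-rate=20 target are assumptions, not a confirmed DeepStream recipe):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# drop-only=true means videorate only drops frames to stay at or below the
# target rate and never duplicates frames to pad it out.
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://192.168.100.109:554/cam0_0 ! rtph265depay ! "
    "h265parse ! nvv4l2decoder ! nvvideoconvert ! "
    "videorate drop-only=true max-rate=20 ! fakesink sync=1"
)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()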

There may be no frame drops, just accumulated latency in the RTSP stream.