When adding inference to the demux pipeline, getting 0 fps after some time

Please provide complete information as applicable to your setup.

• Hardware Platform DGPU
• DeepStream Version 6.0.1
• NVIDIA GPU Driver Version 510.85.02
• Issue Type( questions, new requirements, bugs)
When adding inference to the demux pipeline, I get 0 fps after 10-20 seconds. It does not happen every time, but sometimes.

bug

The following message is printed:
NVMEDIA: NVMEDIABufferProcessing: 1099: Consume the extra signalling for EOS

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
My pipeline:
streammux->nvvideoconvert->capsfilter->inference->demux->queue->nvvideoconvert->valve->nvv4l2h264enc->h264parse->avimux->filesink

Another pipeline:
streammux->inference->nvvideoconvert->capsfilter->demux->queue->tee->
recording_queue->nvvideoconvert->valve->nvv4l2h264enc->h264parse->avimux->filesink
display_queue->fakesink

• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
I need to add inference to the demux pipeline without any issues.

How many input sources are there in your case? Is the source RTSP, a local file, or a v4l2 camera device?

4 RTSP input sources.

This is my dynamic pipeline:

streammux->inference->nvvideoconvert->capsfilter->demux->queue->tee->
recording_queue->nvvideoconvert->valve->nvv4l2h264enc->h264parse->avimux->filesink
display_queue->fakesink

When I add nvinfer after streammux, I get 0 fps.

Does it always happen with nvinfer? What does “dynamic” mean?

The EOS is not a bug. You need to check which operation in your app causes the source to generate EOS.

Does it always happen with nvinfer?
Yes, it happens when I add nvinfer; otherwise it doesn't.

What does “dynamic” mean?
Dynamic means one pipeline for recording video and another one for display.
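
Roughly, the recording branch is switched on and off through the valve element's drop property. A minimal sketch (Python; `valve` here only stands for the valve instance in the recording branch):

# Assumption: `valve` is the valve element in the recording branch
valve.set_property("drop", True)   # drop buffers -> recording stopped
valve.set_property("drop", False)  # pass buffers -> recording running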

nvinfer will not cause EOS. EOS is mostly sent from the source.
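
To see which element actually posts the EOS, you can add a bus watch and print the message source. A minimal sketch for a Python GStreamer app (`pipeline` and `loop` are placeholders for your own pipeline and main loop objects):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def bus_call(bus, message, loop):
    # Print which element posted EOS or an error
    if message.type == Gst.MessageType.EOS:
        print("EOS posted by:", message.src.get_name())
        loop.quit()
    elif message.type == Gst.MessageType.ERROR:
        err, dbg = message.parse_error()
        print("ERROR from", message.src.get_name(), ":", err, dbg)
        loop.quit()
    return True

# `pipeline` and `loop` are your existing Gst.Pipeline and GLib.MainLoop
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message", bus_call, loop)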

Can you show the complete graph of the two pipelines?

pipeline.dot (40.0 KB)

issue screenshots

I’ve tried the following pipeline, which is exactly like yours; it works well with the latest DeepStream 6.1.1:
gst-launch-1.0 uridecodebin uri=rtsp://10.19.225.116/media/video1 ! mux.sink_0 nvstreammux width=1280 height=720 live-source=TRUE batch-size=2 name=mux ! tee name=t t.src_0 ! queue ! nvstreamdemux name=demux demux.src_0 ! queue ! nvvideoconvert ! valve ! nvv4l2h264enc bitrate=800000 ! h264parse ! avimux ! filesink location=./test1.avi uridecodebin uri=rtsp://10.19.225.174/media/video1 ! mux.sink_1 demux.src_1 ! queue ! nvvideoconvert ! valve ! nvv4l2h264enc bitrate=800000 ! h264parse ! avimux ! filesink location=./test2.avi t.src_1 ! queue ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt interval=2 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvmultistreamtiler columns=2 rows=1 width=1280 height=720 ! fakesink

Please update your DeepStream to the latest version.
Please check your RTSP server to confirm whether the “EOS” is caused by the server rather than the client (the DeepStream app).

Yes, initially it works, but after some time (15-30 minutes) I get 0 fps.

I also attached a screenshot.

Can you please explain why I get the line below?
NVMEDIA: NVMEDIABufferProcessing: 1099: Consume the extra signalling for EOS

Please check your RTSP server to confirm whether the “EOS” is caused by the server rather than the client (the DeepStream app).

DeepStream 6.1.1 has the same problem.

Please check your RTSP server to confirm whether the “EOS” is caused by the server rather than the client (the DeepStream app).
How can I check that?

Also, what happens when we add more functions in the probe? Does it affect performance?

It is related to your RTSP server. You can use “export GST_DEBUG=rtspsrc:5,v4l2videodec:5,v4l2videoenc:5” to get more logs.

Either debug the RTSP connection with some RTSP tools, or ask the vendor for help. rtspsrc is compatible with RFC 2326: Real Time Streaming Protocol (RTSP) (rfc-editor.org).

The probe function blocks the GstBuffer; the more work done in the probe function, the worse the performance.
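
For example, a typical buffer probe looks like the sketch below (Python; the element name `pgie` is only a placeholder for your inference element). The buffer is held for the whole duration of the callback, so only lightweight metadata access should be done inside it:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def buffer_probe(pad, info):
    # The buffer is blocked until this callback returns,
    # so keep the work here as light as possible (metadata reads only)
    buf = info.get_buffer()
    if buf is None:
        return Gst.PadProbeReturn.OK
    # ... read batch/object metadata here ...
    return Gst.PadProbeReturn.OK

# Attach to the source pad of the inference element (`pgie` is a placeholder)
pgie_src_pad = pgie.get_static_pad("src")
pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, buffer_probe)

Heavy processing such as motion detection is usually better done outside the probe, for example in a separate thread fed from the probe.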

log.txt (171.3 KB)

From the log, gst_rtspsrc_handle_src_query disappears after time “0:00:10.732959968”, so no video data is sent from the server after that point. Please ask the vendor why the RTSP server stopped sending data. The DeepStream app is just a client; we can never know from the DeepStream app what happens on the server.

Can you check this log?

log_.txt (1.4 MB)

I used another DVR, but the same problem occurs.

This is a different error. The encoder stopped working due to some error. You may need more logs to find out what kind of error it is. Maybe you need “export GST_DEBUG=rtspsrc:5,v4l2videodec:5,v4l2videoenc:7”.

I added motion detection in the probe function, but it affects performance and crashes DeepStream.

How can I add motion detection in DeepStream without affecting performance?