Multi-stream RTSP: the second source stops processing without writing any error

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** Jetson
**• DeepStream Version** 6.0
**• TensorRT Version** 8.0.1-1+cuda10.2

I have two RTSP sources, and both of them are connected. After a while, the second source stops processing without writing any error, so there is no way to know the reason it stopped.
Attached is a sample of the multi-stream initialization (11.7 KB).

Why does the second source stop processing while the RTSP stream itself is still connected?

Could you attach the log with GST_DEBUG=3 and update your DeepStream to the latest version?
Also, you should set is_live to true before setting the streammux live-source parameter in your code.
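A minimal sketch of that ordering, assuming a Python pipeline (the helper name and the RTSP-scheme check are illustrative, not the actual DeepStream API):

```python
def streammux_live_source(uris):
    """Hypothetical helper: decide the nvstreammux 'live-source' value.

    Compute is_live from the input URIs first, then derive the
    streammux property from it, matching the advice above.
    """
    is_live = any(u.startswith("rtsp://") for u in uris)
    # live-source is 1 for live inputs (e.g. RTSP), 0 for file sources
    return 1 if is_live else 0
```

In a real pipeline the returned value would then be applied with something like `streammux.set_property("live-source", streammux_live_source(uris))` before the pipeline starts.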

I changed is_live to true and ran the system in GST_DEBUG mode for more than 8 hours. During this time the system restarted 7 times and these errors appeared:
error.txt (928 Bytes)

From the log analysis, your source may be generating abnormal video frames.
For the reconnection logic, you can refer to our open-source code: the check_rtsp_reconnection_attempts function in sources\apps\apps-common\src\deepstream_source_bin.c.
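As an illustration only (this is a hypothetical Python sketch, not the actual NVIDIA C implementation), the bookkeeping behind such a reconnection check could track the last-buffer timestamp per source, a timeout, and a capped attempt counter:

```python
import time


class RtspWatchdog:
    """Illustrative per-source reconnection bookkeeping, loosely modeled
    on the idea of check_rtsp_reconnection_attempts (names and logic are
    assumptions, not the DeepStream source)."""

    def __init__(self, timeout_sec=10.0, max_attempts=4):
        self.timeout_sec = timeout_sec
        self.max_attempts = max_attempts
        self.last_buffer = {}   # source_id -> last time data was seen
        self.attempts = {}      # source_id -> consecutive reconnect attempts

    def on_buffer(self, source_id, now=None):
        """Call whenever a buffer arrives; resets the attempt counter."""
        now = time.monotonic() if now is None else now
        self.last_buffer[source_id] = now
        self.attempts[source_id] = 0

    def poll(self, source_id, now=None):
        """Return 'ok', 'reconnect', or 'give_up' for one source."""
        now = time.monotonic() if now is None else now
        last = self.last_buffer.get(source_id)
        if last is None or now - last < self.timeout_sec:
            return "ok"
        # Timeout elapsed with no data: count one reconnection attempt
        # and restart the timeout window.
        self.attempts[source_id] = self.attempts.get(source_id, 0) + 1
        self.last_buffer[source_id] = now
        if self.attempts[source_id] > self.max_attempts:
            return "give_up"
        return "reconnect"
```

A stalled second source would then show up as repeated "reconnect" results for its source_id while the first source keeps returning "ok".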

There is no issue with my RTSP source, because when I try a single stream there is no problem; only when I run multiple streams (2 streams) do all these issues appear.

Why is multi-stream in DeepStream not stable? What is the issue?

Have you independently verified these two RTSP streams?


There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one. Thanks.

OK, let’s narrow down the scope of the problem first. Could you try to run your RTSP sources with our demo code? You can use the app below:

GST_DEBUG=3 python3 -i rtsp://xxx rtsp://xxx --pgie nvinfer -c config_infer_primary_peoplenet.txt --no-display

If an error is reported, you can attach the complete log.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.