DeepStream 5.0 sample deepstream-imagedata-multistream cannot play some RTSP sources

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

I set up an RTSP server on a local Linux machine; the stream rtsp://localhost:8554/mystream plays fine in VLC. Yet when I ran the command below, it failed.

apps/deepstream-imagedata-multistream# ./deepstream_imagedata-multistream.py rtsp://localhost:8554/mystream frames

The error messages are:

Frames will be saved in frames
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating Pgie

Creating nvvidconv1

Creating filter1

Creating tiler

Creating nvvidconv

Creating nvosd

Creating EGLSink

Atleast one of the sources is live
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Adding elements to Pipeline

Linking elements in the Pipeline

Now playing…
1 : rtsp://localhost:8554/mystream
Starting pipeline

0:00:00.832362411 3901 0x214dca0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.

INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: …/nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Detected 1 inputs and 2 output network tensors.
0:00:12.890046450 3901 0x214dca0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /root/deepstream-python-dgpu/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640 min: 1x3x368x640 opt: 1x3x368x640 Max: 1x3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40 min: 0 opt: 0 Max: 0

0:00:12.905516653 3901 0x214dca0 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus: [UID 1]: Load new model:dstest_imagedata_config.txt sucessfully
Decodebin child added: source

Error: gst-resource-error-quark: Could not write to resource. (10): gstrtspsrc.c(7671): gst_rtspsrc_close (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstRTSPSrc:source:
> Could not send message. (Received end-of-file)
Exiting app

I tested another RTSP source, rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa, and it worked properly.

Can anyone please shed some light on this?

I tried googling the above error messages, but no luck yet.

Can you try “gst-launch-1.0 uridecodebin uri=rtsp://xxx” before using DeepStream? If it does not work, there is a problem with your RTSP server.
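For example, something along these lines (only a sketch; fakesink just discards the decoded frames, so it also runs over SSH without a display — substitute your own stream URL):

# Connectivity/decode check only: fakesink throws the frames away, so this runs
# headless and exercises just the RTSP source and the decoder.
gst-launch-1.0 uridecodebin uri=rtsp://localhost:8554/mystream ! fakesink sync=false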

Thanks @Fiona.Chen. Below is the output of gst-launch-1.0.

gst-launch-1.0 uridecodebin uri=rtsp://localhost:8554/mystream

Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://172.20.9.53:8554/mystream
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Could not write to resource.
Additional debug info:
gstrtspsrc.c(7671): gst_rtspsrc_close (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Could not send message. (Received end-of-file)
Execution ended after 0:00:05.013390514
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

gst-launch-1.0 rtspsrc location=rtsp://localhost:8554/mystream ! rtph264depay ! decodebin ! autovideosink

Setting pipeline to PAUSED …
PuTTY X11 proxy: Authorisation not recognised
PuTTY X11 proxy: Authorisation not recognised
PuTTY X11 proxy: Authorisation not recognised
PuTTY X11 proxy: Authorisation not recognised
PuTTY X11 proxy: Authorisation not recognised
error: XDG_RUNTIME_DIR not set in the environment.
Pipeline is live and does not need PREROLL …
WARNING: from element /GstPipeline:pipeline0/GstAutoVideoSink:autovideosink0: Could not open DRM module (NULL)
Additional debug info:
gstkmssink.c(710): gst_kms_sink_start (): /GstKMSSink:autovideosink0-actual-sink-kms:
reason: No such file or directory (2)
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://172.20.9.53:8554/mystream
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Progress: (request) Sending PLAY request
Progress: (request) Sent PLAY request
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not write to resource.
Additional debug info:
gstrtspsrc.c(7671): gst_rtspsrc_close (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Could not send message. (Received end-of-file)
Execution ended after 0:00:05.026377818
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …

I’m using the commands below to set up the RTSP server: GitHub - aler9/rtsp-simple-server: ready-to-use RTSP / RTMP / LL-HLS server and proxy that allows to read, publish and proxy video and audio streams
./rtsp-simple-server
ffmpeg -re -stream_loop -1 -i test.mp4 -c copy -f rtsp rtsp://localhost:8554/mystream

Usually, how do you set up an RTSP server to stream a local mp4 file as an RTSP stream?
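One option I have come across but not tested here is GStreamer’s own gst-rtsp-server “test-launch” example, which serves an arbitrary pipeline over RTSP. A sketch only, assuming test.mp4 contains H.264 video and the gst-rtsp-server example binaries have been built:

# Serves the file at rtsp://<server-ip>:8554/test (test-launch's default mount point).
./test-launch "( filesrc location=test.mp4 ! qtdemux ! h264parse ! rtph264pay name=pay0 pt=96 )"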

From the log, the server sends "end-of-file" after receiving the client's request. That means it refused the request.

You may need to debug the content of the messages exchanged between the client and the server to reach the root cause. It has nothing to do with DeepStream.
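For example, rtspsrc can dump the raw RTSP requests and responses. This is only a sketch (the debug property and the rtspsrc debug category are standard GStreamer; which message the server rejects still has to be read from the output):

# Print the RTSP exchange (OPTIONS/DESCRIBE/SETUP/PLAY and the server's replies)
# to see which request is answered with end-of-file.
GST_DEBUG=rtspsrc:6 gst-launch-1.0 rtspsrc location=rtsp://localhost:8554/mystream debug=true ! fakesink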