RTSP stream as an input giving generic error

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hello Team,

I am trying to run the DeepStream SDK on top of the Docker image provided by NVIDIA.

When I try to run DeepStream with an RTSP stream as input, I get the error below.

Could you please help us resolve this?

Docker image: nvcr.io/nvidia/deepstream:5.0.1-20.09-triton

Error Logs:

*** DeepStream: Launched RTSP Streaming at rtsp://localhost:10001/ds-test ***

Unknown or legacy key specified ‘tlt-encode-model’ for group [property]
Warning: ‘input-dims’ parameter has been deprecated. Use ‘infer-dims’ instead.
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
WARNING: …/nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:02.064866178 10244 0x562a806eec60 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/sources/test/experiment_dir_final_600epochs/resnet18_detector.trt
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 8x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 2x34x60

0:00:02.064960032 10244 0x562a806eec60 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/sources/test/experiment_dir_final_600epochs/resnet18_detector.trt
0:00:02.072704945 10244 0x562a806eec60 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/sources/final_check/ds_configs/primary_600.txt sucessfully

Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:181>: Pipeline ready

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
ERROR from src_elem0: Could not open resource for reading and writing.
Debug info: gstrtspsrc.c(7469): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0:
Failed to connect. (Generic error)
** INFO: <reset_source_pipeline:1154>: Resetting source 0
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)
ERROR from src_elem0: Could not open resource for reading and writing.
Debug info: gstrtspsrc.c(7469): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0:
Failed to connect. (Generic error)
**PERF: 0.00 (0.00)
**PERF: 0.00 (0.00)


Can you run a gst-launch command to test the availability of your RTSP server?

gst-launch-1.0 rtspsrc location=rtsp://xxxxxx ! fakesink

I have tried the same, and with that I am getting the same generic error.

Error logs:

root@2d772fdbfe5f:/opt/nvidia/deepstream/deepstream-5.0# gst-launch-1.0 rtspsrc location=rtsp://10.4.254.42:554/stream1 ! fakesink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://10.4.254.42:554/stream1
ERROR: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not open resource for reading and writing.
Additional debug info:
gstrtspsrc.c(7469): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Failed to connect. (Generic error)
ERROR: pipeline doesn’t want to preroll.
Setting pipeline to PAUSED …
Setting pipeline to READY …
Setting pipeline to NULL …
Freeing pipeline …


So it is not a DeepStream problem; it is a GStreamer RTSP connection problem. The error shows that rtspsrc requested the stream from the server, but the server did not give a valid response. Please check your RTSP server settings first.

If you are using the rtspsrc plugin to get the stream, one way is to set the "debug" property of rtspsrc to TRUE; then you can analyze the logged messages against RFC 2326 - Real Time Streaming Protocol (RTSP). For more background, please refer to documentation on the RTSP protocol on the internet.
There are also many open-source RTSP analysis tools, such as Wireshark, which can help analyze the RTSP requests and responses.
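Before going to RTSP-level debugging, it can also help to confirm that the camera's RTSP port is even reachable from inside the container (a firewall or Docker network setting would produce exactly this kind of generic connection error). A minimal sketch using only the Python standard library; the host and port values below are placeholders taken from the logs above, not a verified setup:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts.
        return False

# Placeholder camera address from the logs above; replace with your own.
# print(tcp_reachable("10.4.254.42", 554))
```

If this returns False from inside the container but True from the host, the problem is container networking rather than GStreamer or the camera.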

Thanks a lot .

But with the same RTSP source I was able to get the feed in DeepStream earlier. And even now I am able to get the feed using OpenCV (outside the DeepStream container, to check that the RTSP stream is coming in).

Could you please let us know why we are facing this issue in DeepStream, while from OpenCV we can get the feed?

The root cause will need to be found through debugging.
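One way to debug at the protocol level, without GStreamer in the loop, is to send a raw RTSP OPTIONS request and print the server's reply; per RFC 2326 a healthy server should answer `RTSP/1.0 200 OK`. A minimal sketch, again with the placeholder host and path from the logs above:

```python
import socket

def rtsp_options(host: str, port: int = 554, path: str = "/stream1",
                 timeout: float = 5.0) -> str:
    """Send an RTSP OPTIONS request and return the server's raw response text."""
    request = (
        f"OPTIONS rtsp://{host}:{port}{path} RTSP/1.0\r\n"
        "CSeq: 1\r\n"
        "User-Agent: rtsp-probe\r\n"
        "\r\n"
    )
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request.encode("ascii"))
        # A single recv is enough for the short OPTIONS reply most servers send.
        return sock.recv(4096).decode("ascii", errors="replace")

# Placeholder values from the logs above; replace with your camera.
# print(rtsp_options("10.4.254.42", 554, "/stream1"))
```

If this hangs or returns nothing while OpenCV succeeds from the host, that points at the container's network path to the camera rather than at DeepStream itself.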