Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 7.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 10.3
• NVIDIA GPU Driver Version (valid for GPU only) 572.16
• Issue Type (questions, new requirements, bugs)
I have successfully configured the pipeline, which includes PGIE for face detection, SGIE1 for emotion detection, SGIE2 for gaze prediction, CSV logging of per-second data, and writing the processed video to a file.
However, I am facing an issue when configuring the pipeline with an RTSP live stream. The pipeline works fine with a file source (a file:/// URI), but with the RTSP stream it fails to function correctly. I am using VST to generate the RTSP link, and I would greatly appreciate guidance from anyone with experience in this area on why the RTSP stream isn't working and how to resolve it.
Your assistance would be highly appreciated.
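For context, the way my script handles the source URI is roughly the following (a simplified sketch with hypothetical names; the exact code is in the attached deepstream.txt):

```python
from urllib.parse import urlparse

def source_properties(uri: str) -> dict:
    """Return the settings my script applies to the source based on the
    URI scheme (hypothetical helper; the attached file may differ).
    An rtsp:// source usually wants a jitter buffer, which a file:///
    source does not need."""
    props = {"uri": uri}
    if urlparse(uri).scheme == "rtsp":
        # Applied to the inner rtspsrc of uridecodebin.
        props["rtsp-latency-ms"] = 300
    return props
```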
Here is my DeepStream pipeline code:
deepstream.txt (37.4 KB)
And here are the output logs I'm getting:
root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# GST_DEBUG=rtspsrc:5 python3 deepstream.py --source rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b --config-infer-pgie config_infer_primary_yoloV8_face.txt --config-infer-sgie-emotion emotion_classifier_sgie_config.txt --config-infer-sgie-gaze config_infer_secondary_gaze.txt
DEBUG: Source set to rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b
DEBUG: PGIE Config File: config_infer_primary_yoloV8_face.txt
DEBUG: SGIE Emotion Config File: emotion_classifier_sgie_config.txt
DEBUG: SGIE Gaze Config File: config_infer_secondary_gaze.txt
DEBUG: StreamMux Batch Size: 1
DEBUG: StreamMux Width: 1920
DEBUG: StreamMux Height: 1080
DEBUG: GPU ID: 0
DEBUG: FPS Measurement Interval: 5 sec
/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/deepstream.py:186: DeprecationWarning: Gst.Element.get_request_pad is deprecated
streammux_sink_pad = streammux.get_request_pad(pad_name)
0:00:00.228247738 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 3]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/gaze_estimation.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0
0:00:00.228332536 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 3]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/gaze_estimation.onnx_b1_gpu0_fp16.engine
0:00:00.235451835 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 3]: Load new model:config_infer_secondary_gaze.txt sucessfully
0:00:00.240188849 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/emotion_classifier_transposed.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0
0:00:00.240251617 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/emotion_classifier_transposed.onnx_b1_gpu0_fp16.engine
0:00:00.240889039 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 2]: Load new model:emotion_classifier_sgie_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:00.288550449 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/yolov8n-face.onnx_b1_gpu0_fp32.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0
0:00:00.288600217 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:00.290870217 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully
0:00:00.292898072 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9451:gst_rtspsrc_uri_set_uri:
0:00:00.292929358 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9458:gst_rtspsrc_uri_set_uri:
0:00:00.292935947 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9474:gst_rtspsrc_uri_set_uri:
0:00:00.292963185 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9475:gst_rtspsrc_uri_set_uri:
0:00:00.293162896 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9202:gst_rtspsrc_start:
0:00:00.293195765 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6137:gst_rtspsrc_loop_send_cmd:
0:00:00.293209297 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6174:gst_rtspsrc_loop_send_cmd:
0:00:00.293396610 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:9149:gst_rtspsrc_thread:
0:00:00.293422332 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:5284:gst_rtspsrc_connection_flush:
0:00:00.293429283 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:5147:gst_rtsp_conninfo_connect:
0:00:00.293529272 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:5158:gst_rtsp_conninfo_connect:
0:00:00.293716617 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:5195:gst_rtsp_conninfo_connect:
0:00:00.294374145 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6137:gst_rtspsrc_loop_send_cmd:
0:00:00.294392615 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6162:gst_rtspsrc_loop_send_cmd:
0:00:00.294396259 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6174:gst_rtspsrc_loop_send_cmd:
0:00:00.294429637 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6137:gst_rtspsrc_loop_send_cmd:
0:00:00.294443781 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6174:gst_rtspsrc_loop_send_cmd:
DEBUG: Pipeline set to PLAYING
Pipeline started…
Aborted (core dumped)
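A side note on the DeprecationWarning near the top of the log: I don't think it is related to the crash, but it can be silenced by preferring the newer pad-request API when the installed GStreamer bindings provide it (a sketch, assuming `streammux` is the nvstreammux element from my script):

```python
def request_sink_pad(element, pad_name):
    """Request a pad without tripping the deprecation warning.
    request_pad_simple() replaced get_request_pad() in GStreamer 1.20;
    fall back to the old name on older bindings."""
    getter = getattr(element, "request_pad_simple", None)
    if getter is None:
        getter = element.get_request_pad  # deprecated, but still works
    return getter(pad_name)

# In the pipeline setup:
# streammux_sink_pad = request_sink_pad(streammux, pad_name)
```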
I even tested the stream separately, and it works fine:
root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 rtspsrc location=rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b latency=300 ! decodebin ! autovideosink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Got context from element ‘autovideosink0-actual-sink-nveglgles’: gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Progress: (open) Opening Stream
Pipeline is PREROLLED …
Prerolled, waiting for progress to finish…
Progress: (connect) Connecting to rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Redistribute latency…
Progress: (request) Sending PLAY request
Redistribute latency…
Progress: (request) Sent PLAY request
WARNING: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not read from resource.
Additional debug info:
…/gst/rtsp/gstrtspsrc.c(5964): gst_rtspsrc_reconnect (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Could not receive any UDP packets for 5.0000 seconds, maybe your firewall is blocking it. Retrying using a tcp connection.
Redistribute latency…
Failed to query video capabilities: Inappropriate ioctl for device
Redistribute latency…
3:16:15.5 / 99:99:99.
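Given the warning above (no UDP packets for 5 seconds, then a retry over TCP), my current guess is that the DeepStream pipeline never receives data because it stays on UDP. If that is the cause, forcing TCP on the inner rtspsrc via uridecodebin's "source-setup" signal should help; here is a sketch of what I plan to try ("protocols" and "latency" are real rtspsrc properties, and the constant value comes from gstrtsptransport.h):

```python
GST_RTSP_LOWER_TRANS_TCP = 0x04  # from gstrtsptransport.h

def on_source_setup(_uridecodebin, source):
    """'source-setup' fires before uridecodebin's inner source element
    connects, so rtspsrc can still be configured here. Non-RTSP sources
    (e.g. filesrc) don't have these properties and are left alone."""
    if source.find_property("protocols") is not None:
        source.set_property("protocols", GST_RTSP_LOWER_TRANS_TCP)
        source.set_property("latency", 300)

# Hooked up as:
# uridecodebin.connect("source-setup", on_source_setup)
```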
Waiting for help here…