RTSP streams working in deepstream, RTMP streams giving error: streaming stopped, reason not-linked (-1)

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 7.1
**• JetPack Version (valid for Jetson only)**
**• TensorRT Version** 10.3
**• NVIDIA GPU Driver Version (valid for GPU only)** 572.16
**• Issue Type (questions, new requirements, bugs)**

  • The DeepStream pipeline works perfectly with file sources and RTSP live streams, but it fails on an RTMP stream URL.

  • I debugged the stream with this command and it works fine:
    root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 rtmpsrc location=rtmp://173.208.156.155:1935/live/test ! fakesink
    Setting pipeline to PAUSED …
    Pipeline is PREROLLING …
    Pipeline is PREROLLED …
    Setting pipeline to PLAYING …
    Redistribute latency…
    New clock: GstSystemClock
    0:08:02.9 / 99:99:99.

  • When I try to run it in the DeepStream pipeline with this command:

root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# python3 deepstream.py --source rtmp://173.208.156.155:1935/live/test --config-infer-pgie config_infer_primary_yoloV8_face.txt --config-infer-sgie-emotion emotion_classifier_sgie_config.txt --config-infer-sgie-gaze config_infer_secondary_gaze.txt

  • the output logs:

DEBUG: Source set to rtmp://173.208.156.155:1935/live/test
DEBUG: PGIE Config File: config_infer_primary_yoloV8_face.txt
DEBUG: SGIE Emotion Config File: emotion_classifier_sgie_config.txt
DEBUG: SGIE Gaze Config File: config_infer_secondary_gaze.txt
DEBUG: StreamMux Batch Size: 1
DEBUG: StreamMux Width: 1920
DEBUG: StreamMux Height: 1080
DEBUG: GPU ID: 0
DEBUG: FPS Measurement Interval: 5 sec
/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/deepstream.py:200: DeprecationWarning: Gst.Element.get_request_pad is deprecated
streammux_sink_pad = streammux.get_request_pad(pad_name)
DEBUG: Created uridecodebin for stream 0
Failed to query video capabilities: Inappropriate ioctl for device
0:00:00.094416971 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094448531 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe minimum capture size for pixelformat YM12
0:00:00.094452792 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094456264 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe maximum capture size for pixelformat YM12
0:00:00.094462921 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094465951 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe minimum capture size for pixelformat Y444
0:00:00.094468022 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094470772 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe maximum capture size for pixelformat Y444
0:00:00.094476219 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094479463 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe minimum capture size for pixelformat P410
0:00:00.094481805 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094484820 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe maximum capture size for pixelformat P410
0:00:00.094490804 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094493701 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe minimum capture size for pixelformat PM10
0:00:00.094496010 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094498932 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe maximum capture size for pixelformat PM10
0:00:00.094503196 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094506349 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe minimum capture size for pixelformat NM12
0:00:00.094508428 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:sink Unable to try format: Unknown error -1
0:00:00.094510862 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:sink Could not probe maximum capture size for pixelformat NM12
0:00:00.094536318 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:src Unable to try format: Unknown error -1
0:00:00.094539622 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:src Could not probe minimum capture size for pixelformat H264
0:00:00.094542139 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder-rtsp:src Unable to try format: Unknown error -1
0:00:00.094545333 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder-rtsp:src Could not probe maximum capture size for pixelformat H264
Failed to query video capabilities: Inappropriate ioctl for device
0:00:00.094855300 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094877011 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat YM12
0:00:00.094881295 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094884457 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat YM12
0:00:00.094908816 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094922402 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat Y444
0:00:00.094926871 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094930334 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat Y444
0:00:00.094938329 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094941345 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat P410
0:00:00.094943784 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094946655 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat P410
0:00:00.094951881 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094954856 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat PM10
0:00:00.094957078 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094959686 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat PM10
0:00:00.094964363 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094967345 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat NM12
0:00:00.094969621 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.094972398 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat NM12
0:00:00.094999953 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:src Unable to try format: Unknown error -1
0:00:00.095003676 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2993:gst_v4l2_object_probe_caps_for_format:encoder:src Could not probe minimum capture size for pixelformat H264
0:00:00.095006283 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:3108:gst_v4l2_object_get_nearest_size:encoder:src Unable to try format: Unknown error -1
0:00:00.095009287 638 0x55fb59fc48e0 WARN v4l2 gstv4l2object.c:2999:gst_v4l2_object_probe_caps_for_format:encoder:src Could not probe maximum capture size for pixelformat H264
0:00:00.237208177 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 3]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/gaze_estimation.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.237282096 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 3]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/gaze_estimation.onnx_b1_gpu0_fp16.engine
0:00:00.249474415 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 3]: Load new model:config_infer_secondary_gaze.txt sucessfully
0:00:00.253930802 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/emotion_classifier_transposed.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.253991939 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/emotion_classifier_transposed.onnx_b1_gpu0_fp16.engine
0:00:00.255380110 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 2]: Load new model:emotion_classifier_sgie_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:00.299675408 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/yolov8n-face.onnx_b1_gpu0_fp32.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.299742700 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:00.302419237 638 0x55fb59fc48e0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully
DEBUG: Pipeline set to PLAYING
Pipeline started…
0:00:01.830544466 638 0x7f55dc001e80 FIXME rtmpconnection rtmpconnection.c:869:gst_rtmp_connection_handle_protocol_control:GstRtmpConnection@0x7f5590011100 set peer bandwidth: 5000000, 2
0:00:02.871027582 638 0x7f55dc002b70 WARN uridecodebin gsturidecodebin.c:960:unknown_type_cb: warning: No decoder available for type ‘audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, rate=(int)44100, channels=(int)2, codec_data=(buffer)1210, level=(string)2, base-profile=(string)lc, profile=(string)lc’.
WARNING: gst-stream-error-quark: No decoder available for type ‘audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, rate=(int)44100, channels=(int)2, codec_data=(buffer)1210, level=(string)2, base-profile=(string)lc, profile=(string)lc’. (6): …/gst/playback/gsturidecodebin.c(960): unknown_type_cb (): /GstPipeline:deepstream-combined-pipeline/GstURIDecodeBin:source-bin-0000
0:00:03.163165991 638 0x7f55dc0022d0 WARN basesrc gstbasesrc.c:3127:gst_base_src_loop: error: Internal data stream error.
0:00:03.163228065 638 0x7f55dc0022d0 WARN basesrc gstbasesrc.c:3127:gst_base_src_loop: error: streaming stopped, reason not-linked (-1)
ERROR: gst-stream-error-quark: Internal data stream error. (1): …/libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:deepstream-combined-pipeline/GstURIDecodeBin:source-bin-0000/GstRtmp2Src:source:
streaming stopped, reason not-linked (-1)
DEBUG: EOS sent, waiting for finalization…

  • here is the pipeline code:

deepstream.txt (40.4 KB)

  • I would really appreciate your help in figuring out why this is not working with RTMP streams and how we can make it work.
    Thanks in advance.
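One common cause of a "not-linked" stop with uridecodebin is a dynamically added pad (here, the AAC audio branch that has no decoder, per the warning above) that never gets linked. Below is a minimal sketch of the caps check a "pad-added" callback can use to link only video pads to nvstreammux; the helper name and the commented wiring are illustrative, not taken from deepstream.py:

```python
def is_video_pad(caps_name: str) -> bool:
    """Return True when a pad's caps name looks like decoded video.

    Illustrative helper for a uridecodebin "pad-added" callback: link
    only video pads to nvstreammux and ignore audio pads, so an
    undecodable audio branch cannot leave an unlinked pad that stops
    the source with "not-linked".
    """
    return caps_name.startswith("video/") or caps_name.startswith("image/")


# Inside the real callback the check would be used roughly like this
# (GStreamer calls shown as comments so the sketch runs standalone):
#   caps_name = pad.get_current_caps().get_structure(0).get_name()
#   if is_video_pad(caps_name):
#       pad.link(streammux.get_request_pad("sink_0"))

if __name__ == "__main__":
    print(is_video_pad("video/x-raw"))  # a video pad: link it
    print(is_video_pad("audio/mpeg"))   # an audio pad: ignore it
```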

From the log, inputting the RTMP stream with rtmpsrc works, but the code you shared uses uridecodebin for input. Can the following cmd work?

gst-launch-1.0 uridecodebin uri=rtmp://173.208.156.155:1935/live/test  ! autovideosink

Apologies for the late response; I was not active over the previous weekend. Thank you @fanzh for replying. I will let you know after testing the above command.

@fanzh

  • Here are the logs after running the command below:

  • gst-launch-1.0 uridecodebin uri=rtmp://173.208.156.155:1935/live/test ! autovideosink

  • root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 uridecodebin uri=rtmp://localhost/live/stream ! autovideosink
    Setting pipeline to PAUSED …
    Pipeline is PREROLLING …
    Got context from element ‘autovideosink0-actual-sink-nveglgles’: gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
    Setting pipeline to PLAYING …
    Buffering, setting pipeline to PAUSED …
    Done buffering, setting pipeline to PLAYING …
    Buffering, setting pipeline to PAUSED …
    Missing element: Sorenson Spark Video decoder
    WARNING: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0: No decoder available for type ‘video/x-flash-video, flvversion=(int)1, width=(int)1280, height=(int)720, framerate=(fraction)29/1’.
    Additional debug info:
    …/gst/playback/gsturidecodebin.c(960): unknown_type_cb (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0
    WARNING: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0: Delayed linking failed.
    Additional debug info:
    gst/parse/grammar.y(540): gst_parse_no_more_pads (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0:
    failed delayed linking some pad of GstURIDecodeBin named uridecodebin0 to some pad of GstAutoVideoSink named autovideosink0
    Done buffering, setting pipeline to PLAYING …
    ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRtmp2Src:source: Internal data stream error.
    Additional debug info:
    …/libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRtmp2Src:source:
    streaming stopped, reason not-linked (-1)
    Execution ended after 0:00:00.269218790
    Setting pipeline to NULL …
    Freeing pipeline …

  • The stream works perfectly when I test it in VLC media player, but it is not working in the pipeline, whereas the RTSP streams work without any issue.

@fanzh

  • waiting for your guidance now.

From the log, there is no decoder available to decode the video. "gst-launch-1.0 rtmpsrc location=rtmp://173.208.156.155:1935/live/test ! fakesink" only receives the stream; it does not decode it. Please refer to this topic.


@fanzh the thing is that I am trying everything but still cannot find a suitable way to make RTMP streams work, whereas RTSP and file sources work fine.

There is nothing in the DeepStream docs or any reference examples for RTMP, so I am not sure where else to look. I would really appreciate your help in debugging and solving this.

Here is my DeepStream code, which works perfectly fine for RTSP and file sources:

deepstream.txt (40.4 KB)

@fanzh waiting for your guidance

Noting that VLC can play the video, could you use VLC to check the compressed video format? Please click Tools -> Codec Information in the VLC UI, then share a screenshot of the codec information.


Yes, here is the screenshot from VLC media player, where the RTMP stream runs perfectly:

The NV plugin nvv4l2decoder does not support FLV1 (Sorenson Spark), which is not a common video codec. You can change the video codec to H.264 or another common format. Or could you go to the GStreamer forum to get further suggestions? You may ask about a software decoder for FLV there. Once you get a working pipeline, it should work fine to replace the software decoder with the hardware decoder.
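If re-encoding on the publisher side is an option, one way to turn the FLV1 feed into H.264 is to re-publish it with ffmpeg. Below is a hedged sketch that only builds the command line; the `/live/test_h264` output application/key is made up, and the flags assume an ffmpeg build with libx264:

```python
def build_transcode_cmd(input_url: str, output_url: str) -> list[str]:
    """Build an ffmpeg argv that re-publishes an RTMP stream as H.264.

    Sketch only: assumes ffmpeg with libx264. "-an" drops the audio
    track entirely, which also sidesteps the missing AAC decoder seen
    earlier in this thread.
    """
    return [
        "ffmpeg",
        "-i", input_url,        # source RTMP stream (e.g. FLV1 video)
        "-c:v", "libx264",      # re-encode video to H.264
        "-preset", "veryfast",  # fast encoding preset for live use
        "-an",                  # drop audio entirely
        "-f", "flv",            # RTMP publishing uses FLV muxing
        output_url,             # hypothetical new publish point
    ]


if __name__ == "__main__":
    print(" ".join(build_transcode_cmd(
        "rtmp://173.208.156.155:1935/live/test",
        "rtmp://173.208.156.155:1935/live/test_h264")))
```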


Here is my codec information:

  • After running this command:

  • gst-launch-1.0 uridecodebin uri=rtmp://173.208.156.155:1935/live/test ! autovideosink

  • root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 uridecodebin uri=rtmp://173.208.156.155:1935/live/test ! autovideosink
    Setting pipeline to PAUSED …
    Pipeline is PREROLLING …
    Got context from element ‘autovideosink0-actual-sink-nveglgles’: gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
    buffering… 0%

  • And while running the pipeline command, the pipeline just gets stuck, neither displaying output nor showing further logs:

  • root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# python3 deepstream.py --source rtmp://172.30.5.232:1935/live/test --config-infer-pgie config_infer_primary_yoloV8_face.txt --config-infer-sgie-emotion emotion_classifier_sgie_config.txt --config-infer-sgie-gaze config_infer_secondary_gaze.txt
    DEBUG: Source set to rtmp://172.30.5.232:1935/live/test
    DEBUG: PGIE Config File: config_infer_primary_yoloV8_face.txt
    DEBUG: SGIE Emotion Config File: emotion_classifier_sgie_config.txt
    DEBUG: SGIE Gaze Config File: config_infer_secondary_gaze.txt
    DEBUG: StreamMux Batch Size: 1
    DEBUG: StreamMux Width: 1920
    DEBUG: StreamMux Height: 1080
    DEBUG: GPU ID: 0
    DEBUG: FPS Measurement Interval: 5 sec
    /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/deepstream.py:200: DeprecationWarning: Gst.Element.get_request_pad is deprecated
    streammux_sink_pad = streammux.get_request_pad(pad_name)
    DEBUG: Created uridecodebin for stream 0
    Failed to query video capabilities: Inappropriate ioctl for device
    Failed to query video capabilities: Inappropriate ioctl for device
    0:00:00.276729963 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 3]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/gaze_estimation.onnx_b1_gpu0_fp16.engine
    Implicit layer support has been deprecated
    INFO: [Implicit Engine Info]: layers num: 0

0:00:00.276835812 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 3]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/gaze_estimation.onnx_b1_gpu0_fp16.engine
0:00:00.286667733 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 3]: Load new model:config_infer_secondary_gaze.txt sucessfully
0:00:00.291411711 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/emotion_classifier_transposed.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.291469226 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/emotion_classifier_transposed.onnx_b1_gpu0_fp16.engine
0:00:00.292070155 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 2]: Load new model:emotion_classifier_sgie_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:00.339749914 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/yolov8n-face.onnx_b1_gpu0_fp32.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.339805881 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:00.342391101 4889 0x55fee30ee7e0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully
DEBUG: Pipeline set to PLAYING
Pipeline started…

@fanzh even though the codec format is now correct, I am still not able to run the RTMP stream in the pipeline, whereas RTSP runs fine.

I kindly need your further guidance to solve this matter.

And here are the logs with GST_DEBUG=5 enabled:
0:01:07.484230576 4958 0x7f6490001e80 DEBUG rtmpconnection rtmpconnection.c:483:gst_rtmp_connection_input_ready:GstRtmpConnection@0x7f644c017550 read IO error 24 Socket I/O timed out, continuing

  1. If you use the following cmd, can you see the output video?
gst-launch-1.0 uridecodebin uri=rtmp://173.208.156.155:1935/live/test  ! autovideosink
  2. Could you simplify the code to narrow down this issue? For example, if you use a “source->nvstreammux->nveglglessink” pipeline, can you see the video? If you use “source->nvstreammux->pgie->nveglglessink”, can you see the video?

Here is what I'm getting while running the above command:
root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 uridecodebin uri=rtmp://173.208.156.155:1935/live/test ! autovideosink
Setting pipeline to PAUSED …
Pipeline is PREROLLING …
Got context from element ‘autovideosink0-actual-sink-nveglgles’: gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING …
Buffering, setting pipeline to PAUSED …
Missing element: MPEG-4 AAC decoder
WARNING: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0: No decoder available for type ‘audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, rate=(int)44100, channels=(int)2, codec_data=(buffer)1210, level=(string)2, base-profile=(string)lc, profile=(string)lc’.
Additional debug info:
…/gst/playback/gsturidecodebin.c(960): unknown_type_cb (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0
Done buffering, setting pipeline to PLAYING …
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRtmp2Src:source: Internal data stream error.
Additional debug info:
…/libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRtmp2Src:source:
streaming stopped, reason not-linked (-1)
Execution ended after 0:00:00.294430186
Setting pipeline to NULL …
Freeing pipeline …

  • When I try to run this command:

  • gst-launch-1.0 rtmpsrc location=rtmp://173.208.156.155:1935/live/test ! fakesink

  • root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 rtmpsrc location=rtmp://173.208.156.155:1935/live/test ! fakesink
    Setting pipeline to PAUSED …
    Pipeline is PREROLLING …
    Pipeline is PREROLLED …
    Setting pipeline to PLAYING …
    Redistribute latency…
    New clock: GstSystemClock
    0:38:26.9 / 99:99:99.

@fanzh waiting for your reply

Noticing that the audio can't be decoded, could you disable the audio of the stream, then try the cmd in my last comment again?
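For reference, an alternative to changing the stream itself is to keep uridecodebin but restrict it to video: uridecodebin has `expose-all-streams` and `caps` properties, and with `expose-all-streams=false` streams whose caps do not match are ignored rather than exposed as pads that then go unlinked. A sketch follows, using a fake element so it runs without GStreamer installed; in real code `make_caps` would be `Gst.Caps.from_string`:

```python
def restrict_to_video(decodebin, make_caps):
    """Configure a uridecodebin-like element to expose only video streams.

    With expose-all-streams=False, branches whose caps do not intersect
    the `caps` property (here video-only) are ignored instead of being
    exposed as pads that would otherwise stay unlinked. `make_caps`
    stands in for Gst.Caps.from_string so the sketch is self-contained.
    """
    decodebin.set_property("expose-all-streams", False)
    decodebin.set_property("caps", make_caps("video/x-raw(ANY)"))


class FakeElement:
    """Minimal stand-in that records set_property calls for the demo."""
    def __init__(self):
        self.props = {}

    def set_property(self, name, value):
        self.props[name] = value


if __name__ == "__main__":
    el = FakeElement()
    restrict_to_video(el, lambda s: s)  # identity in place of Caps parsing
    print(el.props)
```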

  • I tried to remove the audio from the stream; here is my RTMP stream's codec information:

And then, when I try to run this command:

  • gst-launch-1.0 uridecodebin uri=rtmp://localhost/live/stream ! autovideosink

I'm getting these logs:

  • root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 uridecodebin uri=rtmp://localhost/live/test ! autovideosink
    Setting pipeline to PAUSED …
    Pipeline is PREROLLING …
    Got context from element ‘autovideosink0-actual-sink-nveglgles’: gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
    buffering… 0%