DeepStream Pipeline Not Working with RTSP Streams

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 10.3
• NVIDIA GPU Driver Version (valid for GPU only): 572.16
• Issue Type (questions, new requirements, bugs):

I have successfully configured a pipeline that includes a PGIE for face detection, SGIE1 for emotion detection, SGIE2 for gaze prediction, CSV logging of per-second data, and writes the processed video to a file.

However, I am facing an issue when configuring the pipeline with RTSP live streams. The pipeline works perfectly with a file source (using a file:/// URI), but with an RTSP stream it fails to run. I am using VST to generate the RTSP link, and I would greatly appreciate guidance from anyone with experience in this area on why the RTSP stream isn't working and how to resolve it.

Your assistance would be highly appreciated.

Here is my DeepStream pipeline code:
deepstream.txt (37.4 KB)

And here are the output logs I am getting:
root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# GST_DEBUG=rtspsrc:5 python3 deepstream.py --source rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b --config-infer-pgie config_infer_primary_yoloV8_face.txt --config-infer-sgie-emotion emotion_classifier_sgie_config.txt --config-infer-sgie-gaze config_infer_secondary_gaze.txt
DEBUG: Source set to rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b
DEBUG: PGIE Config File: config_infer_primary_yoloV8_face.txt
DEBUG: SGIE Emotion Config File: emotion_classifier_sgie_config.txt
DEBUG: SGIE Gaze Config File: config_infer_secondary_gaze.txt
DEBUG: StreamMux Batch Size: 1
DEBUG: StreamMux Width: 1920
DEBUG: StreamMux Height: 1080
DEBUG: GPU ID: 0
DEBUG: FPS Measurement Interval: 5 sec
/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/deepstream.py:186: DeprecationWarning: Gst.Element.get_request_pad is deprecated
streammux_sink_pad = streammux.get_request_pad(pad_name)
0:00:00.228247738 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 3]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/gaze_estimation.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.228332536 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 3]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/gaze_estimation.onnx_b1_gpu0_fp16.engine
0:00:00.235451835 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 3]: Load new model:config_infer_secondary_gaze.txt sucessfully
0:00:00.240188849 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/emotion_classifier_transposed.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.240251617 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/emotion_classifier_transposed.onnx_b1_gpu0_fp16.engine
0:00:00.240889039 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 2]: Load new model:emotion_classifier_sgie_config.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvMultiObjectTracker] Initialized
0:00:00.288550449 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/yolov8n-face.onnx_b1_gpu0_fp32.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:00.288600217 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction/yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:00.290870217 3170 0x5611cb0fa620 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully
0:00:00.292898072 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9451:gst_rtspsrc_uri_set_uri: parsing URI
0:00:00.292929358 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9458:gst_rtspsrc_uri_set_uri: configuring URI
0:00:00.292935947 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9474:gst_rtspsrc_uri_set_uri: set uri: rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b
0:00:00.292963185 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9475:gst_rtspsrc_uri_set_uri: request uri is: rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b
0:00:00.293162896 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:9202:gst_rtspsrc_start: starting
0:00:00.293195765 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6137:gst_rtspsrc_loop_send_cmd: sending cmd OPEN
0:00:00.293209297 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6174:gst_rtspsrc_loop_send_cmd: not interrupting busy cmd unknown
0:00:00.293396610 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:9149:gst_rtspsrc_thread: got command OPEN
0:00:00.293422332 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:5284:gst_rtspsrc_connection_flush: set flushing 0
0:00:00.293429283 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:5147:gst_rtsp_conninfo_connect: creating connection (rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b)…
0:00:00.293529272 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:5158:gst_rtsp_conninfo_connect: sanitized uri rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b
0:00:00.293716617 3170 0x7fc0dc001a40 DEBUG rtspsrc gstrtspsrc.c:5195:gst_rtsp_conninfo_connect: connecting (rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b)…
0:00:00.294374145 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6137:gst_rtspsrc_loop_send_cmd: sending cmd WAIT
0:00:00.294392615 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6162:gst_rtspsrc_loop_send_cmd: cancel previous request LOOP
0:00:00.294396259 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6174:gst_rtspsrc_loop_send_cmd: not interrupting busy cmd OPEN
0:00:00.294429637 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6137:gst_rtspsrc_loop_send_cmd: sending cmd PLAY
0:00:00.294443781 3170 0x5611cb0fa620 DEBUG rtspsrc gstrtspsrc.c:6174:gst_rtspsrc_loop_send_cmd: not interrupting busy cmd OPEN
DEBUG: Pipeline set to PLAYING
Pipeline started…
Aborted (core dumped)

I even tested the stream separately and it works fine:
root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 rtspsrc location=rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b latency=300 ! decodebin ! autovideosink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Got context from element ‘autovideosink0-actual-sink-nveglgles’: gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Progress: (open) Opening Stream
Pipeline is PREROLLED …
Prerolled, waiting for progress to finish…
Progress: (connect) Connecting to rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Redistribute latency…
Progress: (request) Sending PLAY request
Redistribute latency…
Progress: (request) Sent PLAY request
WARNING: from element /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0: Could not read from resource.
Additional debug info:
…/gst/rtsp/gstrtspsrc.c(5964): gst_rtspsrc_reconnect (): /GstPipeline:pipeline0/GstRTSPSrc:rtspsrc0:
Could not receive any UDP packets for 5.0000 seconds, maybe your firewall is blocking it. Retrying using a tcp connection.
Redistribute latency…
Failed to query video capabilities: Inappropriate ioctl for device
Redistribute latency…
3:16:15.5 / 99:99:99.
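As a side note, the warning above shows rtspsrc timing out on UDP after 5 seconds and then retrying over TCP. Forcing TCP from the start avoids that stall; an untested sketch using rtspsrc's `protocols` property (URL copied from the post above):

```shell
# Force TCP interleaved transport so RTP does not depend on open UDP ports
gst-launch-1.0 rtspsrc location=rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b \
    protocols=tcp latency=300 ! decodebin ! autovideosink
```

With uridecodebin, the same property can be set on the created rtspsrc from a `source-setup` signal callback.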

Waiting for help here…

Since your code uses uridecodebin, does the following command run well? If not, could you share the running log? If yes, please simplify the pipeline first to check whether the custom code causes the error. For example, does "source → nvstreammux → fakesink" run well?

gst-launch-1.0 uridecodebin uri=rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b  ! autovideosink
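The "source → nvstreammux → fakesink" check suggested above could be sketched with gst-launch as follows. This is an untested sketch assuming the DeepStream plugins are installed; nvstreammux sink pads are request pads named sink_%u, requested here as m.sink_0:

```shell
# Minimal source -> nvstreammux -> fakesink check for the RTSP input
gst-launch-1.0 -v nvstreammux name=m batch-size=1 width=1920 height=1080 live-source=1 \
    ! fakesink sync=false \
    uridecodebin uri=rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b ! m.sink_0
```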

Here is a sample of the output logs:
sample_output_logs.txt (46.8 KB)

And these are the logs for the command you provided:

root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/deepstream_python_apps/Backup_Deepstream_Gaze_Prediction# gst-launch-1.0 uridecodebin uri=rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b ! autovideosink
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Got context from element ‘autovideosink0-actual-sink-nveglgles’: gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Progress: (open) Opening Stream
Pipeline is PREROLLED …
Prerolled, waiting for progress to finish…
Progress: (connect) Connecting to rtsp://173.208.156.155:8555/live/a1b9e807-a6c1-41ab-a9d6-ec3cd6bb4b9b
Progress: (open) Retrieving server options
Progress: (open) Retrieving media info
Progress: (request) SETUP stream 0
Progress: (open) Opened Stream
Setting pipeline to PLAYING …
New clock: GstSystemClock
Progress: (request) Sending PLAY request
Redistribute latency…
Progress: (request) Sending PLAY request
Redistribute latency…
Progress: (request) Sent PLAY request
WARNING: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Could not read from resource.
Additional debug info:
…/gst/rtsp/gstrtspsrc.c(5964): gst_rtspsrc_reconnect (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Could not receive any UDP packets for 5.0000 seconds, maybe your firewall is blocking it. Retrying using a tcp connection.
Redistribute latency…
Failed to query video capabilities: Inappropriate ioctl for device
Redistribute latency…
19:48:57. / 99:99:99.

My pipeline uses the baseline code of this repo for face detection:

When I run the RTSP stream with just the above repo, which only detects faces, it works fine. But in my pipeline, which includes PGIE for face detection, SGIE1 for emotion detection, SGIE2 for gaze prediction, CSV logging of per-second data, and writes the processed video to a file, RTSP does not work.

Kindly help me debug this problem.

@fanzh waiting for your guidance here.

From your last comment, using uridecodebin to input the RTSP stream works. Since deepstream.txt is custom code, please simplify the code to narrow down this issue. Here are the steps:

  1. If you remove sgie1 and sgie2, does the code run well? Is the output video fine?
  2. If yes in step 1, and you then remove only sgie2, are the results of pgie and sgie1 correct?
  3. If yes in step 2, the Aborted issue should be related to sgie2.

Alright! I now have two separate pipelines: one with gaze prediction and one with emotion detection. I will test both with RTSP sources and then let you know where the issue lies.

I just have one question: could this be an issue with saving the annotated output video or the CSV logs? If so, I should also test the pipeline with the output-video and CSV saving enabled.

@fanzh

So, @fanzh, I checked both pipelines separately with an RTSP source. Emotion detection works totally fine and saves the logs and output video as well. The only issue lies with the gaze prediction pipeline, which does not work with RTSP streams, although it works fine with file sources both separately and combined. What can be the reason, and how can we resolve it?

@fanzh waiting for your guidance

Here is the separate pipeline for the gaze prediction:

Gaze.txt (23.9 KB)

  1. Do you mean pgie+sgie1 runs well in one pipeline with an RTSP source, while pgie+sgie1+sgie2 cannot run in one pipeline with an RTSP source? If yes, the issue should be related to sgie2. Do you also mean pgie+sgie1+sgie2 runs well in one pipeline with a local file?
  2. Is sgie2 dependent on the results of sgie1?
  • Yes, pgie+sgie1+sgie2 runs well in one pipeline with a local file, but not with an RTSP source.
  • Is sgie2 dependent on the results of sgie1? No; the inference of both sgie1 and sgie2 depends on the pgie.
  • pgie+sgie1: works with file and RTSP perfectly.
  • pgie+sgie2: works with file but not with RTSP.
  • pgie+sgie1+sgie2 (combined pipeline): works with file but not with RTSP.

From the above debugging it is clear the issue lies within sgie2, the gaze prediction model. It works perfectly fine with a file source, and since I am hosting the RTSP server locally there is no buffering or network issue, yet it does not work with RTSP streams.

@fanzh waiting for your guidance.

If using pgie+sgie1 with RTSP, are the results of pgie correct? Can you see the bboxes and emotion labels?
If using pgie+sgie2 with RTSP, can you see the bboxes? How did you know sgie2 can't work?

  • If using pgie+sgie1 with RTSP, are the results of pgie correct? Can you see the bboxes and emotion labels?
    - Yes, everything seems fine.

  • If using pgie+sgie2 with RTSP, can you see the bboxes? How did you know sgie2 can't work?
    - No, the pipeline does not even start.

  • Here are the logs with RTSP:
    root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face# python3 deepstream.py --source rtsp://localhost:8554/live.stream --config-infer-pgie config_infer_primary_yoloV8_face.txt --config-infer-sgie-gaze config_infer_secondary_gaze.txt
    DEBUG: CSV logging initialized to gaze_log_2025-02-18_13:57:59.csv
    /opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/deepstream.py:143: DeprecationWarning: Gst.Element.get_request_pad is deprecated
    streammux_sink_pad = streammux.get_request_pad(pad_name)
    Starting pipeline…
    Failed to query video capabilities: Inappropriate ioctl for device
    0:00:00.524485815 2102 0x564a7811aaa0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<sgie_gaze> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/gaze_estimation.onnx_b1_gpu0_fp16.engine
    Implicit layer support has been deprecated
    INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:327 [Implicit Engine Info]: layers num: 0

0:00:00.524612983 2102 0x564a7811aaa0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<sgie_gaze> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/gaze_estimation.onnx_b1_gpu0_fp16.engine
0:00:00.529914173 2102 0x564a7811aaa0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<sgie_gaze> [UID 2]: Load new model:config_infer_secondary_gaze.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn’t exist. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:00.610261079 2102 0x564a7811aaa0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
Implicit layer support has been deprecated
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:327 [Implicit Engine Info]: layers num: 0

0:00:00.610349349 2102 0x564a7811aaa0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:00.614126259 2102 0x564a7811aaa0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully
Aborted (core dumped)

  • And here are the logs with the file source:

root@AAM-LAPTOP-027:/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face# python3 deepstream.py --source file:///opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/video.mp4 --config-infer-pgie config_infer_primary_yoloV8_face.txt --config-infer-sgie-gaze config_infer_secondary_gaze.txt
DEBUG: CSV logging initialized to gaze_log_2025-02-18_13:33:17.csv
/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/deepstream.py:143: DeprecationWarning: Gst.Element.get_request_pad is deprecated
streammux_sink_pad = streammux.get_request_pad(pad_name)
Starting pipeline…
Failed to query video capabilities: Inappropriate ioctl for device
0:00:00.266959539 1968 0x556d0f7d6db0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<sgie_gaze> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/gaze_estimation.onnx_b1_gpu0_fp16.engine
Implicit layer support has been deprecated
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:327 [Implicit Engine Info]: layers num: 0

0:00:00.267035327 1968 0x556d0f7d6db0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<sgie_gaze> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/gaze_estimation.onnx_b1_gpu0_fp16.engine
0:00:00.274455809 1968 0x556d0f7d6db0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<sgie_gaze> [UID 2]: Load new model:config_infer_secondary_gaze.txt sucessfully
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn’t exist. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:00.314066440 1968 0x556d0f7d6db0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
Implicit layer support has been deprecated
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:327 [Implicit Engine Info]: layers num: 0

0:00:00.314117286 1968 0x556d0f7d6db0 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.1/sources/DeepStream-Yolo-Face/yolov8n-face.onnx_b1_gpu0_fp32.engine
0:00:00.316740470 1968 0x556d0f7d6db0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:config_infer_primary_yoloV8_face.txt sucessfully
WARNING: gst-stream-error-quark: No decoder available for type ‘audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)1210, rate=(int)44100, channels=(int)2’. (6): …/gst/playback/gsturidecodebin.c(960): unknown_type_cb (): /GstPipeline:pipeline0/GstURIDecodeBin:source-bin-0000
Failed to query video capabilities: Inappropriate ioctl for device
DEBUG: No face detected this frame
DEBUG: No face detected this frame
DEBUG: No face detected this frame
DEBUG: No face detected this frame
DEBUG: No face detected this frame
DEBUG: FPS of stream 1: 32.34 (32.34)
DEBUG: FPS of stream 1: 29.75 (31.05)
DEBUG: FPS of stream 1: 29.78 (30.63)
DEBUG: FPS of stream 1: 29.77 (30.41)
DEBUG: FPS of stream 1: 29.76 (30.28)
DEBUG: FPS of stream 1: 29.78 (30.20)
nvstreammux: Successfully handled EOS for source_id=0
DEBUG: EOS
Stopping pipeline…
DEBUG: Writing 32 entries to CSV
CSV log saved to gaze_log_2025-02-18_13:33:17.csv
Video output saved to gaze_output_video_2025-02-18_13:33:17.mp4
Pipeline stopped

  • As you can see, the SGIE works fine with the file source, but when I change the file source to RTSP the pipeline collapses and does not even start, with only this error: Aborted (core dumped)

@fanzh waiting for your guidance here

Please add logs in sgie_gaze_pad_probe to check whether sgie2 outputs buffers. If using pgie+sgie2+fakesink with RTSP, can the app run well? Are the new logs in sgie_gaze_pad_probe printed?
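Since the process aborts in native code rather than reporting a GStreamer error, a native backtrace could also localize the crash. An untested sketch, with the script invocation copied from the earlier posts:

```shell
# Run the app under gdb; when it aborts, 'bt' prints the native backtrace
gdb -ex run -ex bt --args python3 deepstream.py \
    --source rtsp://localhost:8554/live.stream \
    --config-infer-pgie config_infer_primary_yoloV8_face.txt \
    --config-infer-sgie-gaze config_infer_secondary_gaze.txt
```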


Here is a sample of the output buffers logged in sgie_gaze_pad_probe:
DEBUG: SGIE2 gaze model processing buffer for frame 5
DEBUG: SGIE2 output - Pitch: -0.19, Yaw: 6.54
DEBUG: SGIE2 gaze model processing buffer for frame 6
DEBUG: SGIE2 output - Pitch: -0.16, Yaw: 6.69
DEBUG: SGIE2 gaze model processing buffer for frame 7
DEBUG: SGIE2 output - Pitch: 0.97, Yaw: 6.83
DEBUG: SGIE2 gaze model processing buffer for frame 8
DEBUG: SGIE2 output - Pitch: 0.53, Yaw: 6.49
DEBUG: SGIE2 gaze model processing buffer for frame 9
DEBUG: SGIE2 output - Pitch: 0.16, Yaw: 6.51
DEBUG: SGIE2 gaze model processing buffer for frame 10
DEBUG: SGIE2 output - Pitch: -0.14, Yaw: 6.57
DEBUG: SGIE2 gaze model processing buffer for frame 11
DEBUG: SGIE2 output - Pitch: 0.12, Yaw: 6.78
DEBUG: SGIE2 gaze model processing buffer for frame 12
DEBUG: SGIE2 output - Pitch: 0.07, Yaw: 6.70
DEBUG: SGIE2 gaze model processing buffer for frame 13

And here is the simplified version of the pipeline using only pgie+sgie2+fakesink:

def main():
    # NOTE: excerpt. The imports and the helpers create_uridecode_bin,
    # sgie_gaze_pad_probe and bus_call, as well as SOURCE, CONFIG_INFER_*
    # and the STREAMMUX_* constants, are defined elsewhere in deepstream.py.
    global csv_file, data_collection, second_based_predictions, video_start_time

    Gst.init(None)
    loop = GLib.MainLoop()

    # Generate output filenames with timestamp
    current_datetime = datetime.now(pakistan_timezone).strftime('%Y-%m-%d_%H:%M:%S')
    csv_filename = f'gaze_log_{current_datetime}.csv'
    video_filename = f'gaze_output_video_{current_datetime}.mp4'

    # Initialize second-based predictions storage
    second_based_predictions = {}

    # Capture the time when the video starts processing
    video_start_time = time.time()

    print(f"DEBUG: CSV logging initialized to {csv_filename}")

    # ------------------------------------------------
    # Pipeline Setup
    # ------------------------------------------------
    pipeline = Gst.Pipeline()
    if not pipeline:
        sys.stderr.write('ERROR: Failed to create pipeline\n')
        sys.exit(1)

    # Create elements
    streammux = Gst.ElementFactory.make('nvstreammux', 'streammux')
    source_bin = create_uridecode_bin(0, SOURCE, streammux)
    pgie = Gst.ElementFactory.make('nvinfer', 'pgie')
    sgie_gaze = Gst.ElementFactory.make('nvinfer', 'sgie_gaze')
    fakesink = Gst.ElementFactory.make('fakesink', 'fakesink')

    # Check element creation (report by name: a failed make() returns None,
    # so the element object itself cannot be asked for its name)
    names = ['streammux', 'source_bin', 'pgie', 'sgie_gaze', 'fakesink']
    elements = [streammux, source_bin, pgie, sgie_gaze, fakesink]
    for name, element in zip(names, elements):
        if not element:
            sys.stderr.write(f'ERROR: Failed to create {name} element\n')
            sys.exit(1)

    # Add elements to pipeline
    for element in elements:
        pipeline.add(element)

    # ------------------------------------------------
    # Configure Elements
    # ------------------------------------------------
    streammux.set_property('batch-size', STREAMMUX_BATCH_SIZE)
    streammux.set_property('width', STREAMMUX_WIDTH)
    streammux.set_property('height', STREAMMUX_HEIGHT)
    streammux.set_property('batched-push-timeout', 25000)
    streammux.set_property('live-source', 1 if 'rtsp://' in SOURCE else 0)

    pgie.set_property('config-file-path', CONFIG_INFER_PGIE)
    sgie_gaze.set_property('config-file-path', CONFIG_INFER_SGIE_GAZE)

    # ------------------------------------------------
    # Link Pipeline Elements
    # ------------------------------------------------
    streammux.link(pgie)
    pgie.link(sgie_gaze)
    sgie_gaze.link(fakesink)  # Link PGIE -> SGIE2 -> Fakesink for testing

    # ------------------------------------------------
    # Add Probes
    # ------------------------------------------------
    sgie_gaze_src_pad = sgie_gaze.get_static_pad('src')
    sgie_gaze_src_pad.add_probe(Gst.PadProbeType.BUFFER, sgie_gaze_pad_probe, 0)

    # ------------------------------------------------
    # Start Pipeline
    # ------------------------------------------------
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect('message', bus_call, loop)

    print("Starting pipeline...")
    pipeline.set_state(Gst.State.PLAYING)

    try:
        loop.run()
    except KeyboardInterrupt:
        print("\nPipeline interrupted")
    finally:
        print("Stopping pipeline...")
        pipeline.set_state(Gst.State.NULL)

        # Ensure collected data is written to CSV
        if second_based_predictions:
            print(f"DEBUG: Writing {len(second_based_predictions)} entries to CSV")

            try:
                with open(csv_filename, mode='w', newline='', encoding='utf-8') as file:
                    csv_writer = csv.DictWriter(file, fieldnames=['Timestamp', 'Pitch', 'Yaw', 'Attention_Status'])
                    csv_writer.writeheader()
                    csv_writer.writerows(second_based_predictions.values())  # Write one row per second
                print(f"CSV log saved to {csv_filename}")
            except Exception as e:
                print(f"Error saving CSV: {str(e)}")
        else:
            print("WARNING: No data collected, CSV will remain empty")

        print(f"Video output saved to {video_filename}")
        print("Pipeline stopped")
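For reference, the per-second aggregation and CSV writing this script performs can be exercised on its own, independent of GStreamer. The function names and the attention threshold below are illustrative assumptions, not the original implementation:

```python
import csv
import io

def aggregate_per_second(samples, pitch_limit=15.0, yaw_limit=15.0):
    """Keep one entry per whole second of video time (the last sample wins).

    samples: iterable of (seconds_since_start, pitch, yaw) tuples.
    The pitch/yaw limits for 'Attentive' are hypothetical placeholders.
    """
    per_second = {}
    for t, pitch, yaw in samples:
        second = int(t)  # bucket by whole seconds since video start
        attentive = abs(pitch) <= pitch_limit and abs(yaw) <= yaw_limit
        per_second[second] = {
            'Timestamp': second,
            'Pitch': round(pitch, 2),
            'Yaw': round(yaw, 2),
            'Attention_Status': 'Attentive' if attentive else 'Distracted',
        }
    return per_second

def write_csv(per_second, fileobj):
    # Same fieldnames as the script's DictWriter: one row per second
    writer = csv.DictWriter(
        fileobj, fieldnames=['Timestamp', 'Pitch', 'Yaw', 'Attention_Status'])
    writer.writeheader()
    writer.writerows(per_second.values())

# Pitch/yaw values taken from the probe logs earlier in the thread
samples = [(0.2, -0.19, 6.54), (0.7, -0.16, 6.69), (1.1, 0.97, 6.83)]
buf = io.StringIO()
write_csv(aggregate_per_second(samples), buf)
print(buf.getvalue())  # header plus two data rows, one per second
```

Bucketing by `int(t)` keeps only the last sample of each second, which matches the one-row-per-second CSV layout the script produces.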

And this is the result of running pgie+sgie2+fakesink with RTSP:
Aborted (core dumped)

As you can see, even after simplifying the pipeline I am still getting the same result.
@fanzh

@fanzh waiting for your guidance here

It seems the app did not print "DEBUG: SGIE2 output" after frame 13. Please check whether the app crashed in sgie_gaze_pad_probe. You can empty sgie_gaze_pad_probe, add a single log line to it, and then check whether the app runs well.


No, I only provided a sample of the logs, since the full logs are very lengthy; the probe logs themselves are fine:
DEBUG: SGIE2 gaze model processing buffer for frame 930
DEBUG: SGIE2 output - Pitch: 4.49, Yaw: -0.06
DEBUG: SGIE2 gaze model processing buffer for frame 931
DEBUG: SGIE2 output - Pitch: 5.54, Yaw: 0.09
DEBUG: SGIE2 gaze model processing buffer for frame 932
DEBUG: SGIE2 output - Pitch: 4.99, Yaw: -0.61
DEBUG: SGIE2 gaze model processing buffer for frame 933
DEBUG: SGIE2 output - Pitch: 4.95, Yaw: 0.54
DEBUG: SGIE2 gaze model processing buffer for frame 934
DEBUG: SGIE2 output - Pitch: 5.42, Yaw: 2.03
DEBUG: SGIE2 gaze model processing buffer for frame 935
DEBUG: SGIE2 output - Pitch: 4.79, Yaw: 2.53
nvstreammux: Successfully handled EOS for source_id=0
DEBUG: SGIE2 gaze model processing buffer for frame 936
DEBUG: SGIE2 output - Pitch: 4.97, Yaw: 0.56
DEBUG: EOS
Stopping pipeline…
DEBUG: Writing 6 entries to CSV
CSV log saved to gaze_log_2025-02-18_14:54:31.csv
Video output saved to gaze_output_video_2025-02-18_14:54:31.mp4
Pipeline stopped

@fanzh