Error running deepstream-imagedata-multistream with python

When I run the DeepStream code against an RTSP stream, the errors below occur, although the same RTSP stream plays fine with OpenCV, so it feels as if the pipeline is broken. Is there any way, or a test case, to make it keep decoding continuously?
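For reference, the kind of OpenCV check I mean is roughly the following (a minimal sketch with a placeholder URL, assuming OpenCV was built with FFmpeg/RTSP support):

```python
# Minimal RTSP smoke test with OpenCV (placeholder URL; assumes OpenCV was
# built with FFmpeg support so it can open RTSP sources).
import cv2

url = "rtsp://user:password@192.168.1.xxx/Streaming/Channels/1"  # placeholder
cap = cv2.VideoCapture(url)
if not cap.isOpened():
    raise RuntimeError("Could not open RTSP stream")

frames = 0
while frames < 500:  # read a fixed number of frames as a continuity check
    ok, _ = cap.read()
    if not ok:
        print("Stream stopped after", frames, "frames")
        break
    frames += 1
else:
    print("Decoded 500 frames continuously")
cap.release()
```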

**PERF:  {'stream0': 0.0, 'stream1': 0.0}

Decodebin child added: decodebin0

Decodebin child added: rtph264depay0

Decodebin child added: h264parse0

Decodebin child added: capsfilter0

Decodebin child added: decodebin1

Decodebin child added: rtph264depay1

Decodebin child added: h264parse1

Decodebin child added: capsfilter1

Decodebin child added: decodebin2

Decodebin child added: decodebin3

Decodebin child added: rtppcmadepay0

Decodebin child added: nvv4l2decoder1

Decodebin child added: nvv4l2decoder0

Decodebin child added: rtppcmadepay1

Decodebin child added: alawdec0

Decodebin child added: alawdec1

In cb_newpad

In cb_newpad

In cb_newpad

In cb_newpad

Frame Number= 0 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 0 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
0:00:19.303449400 13201      0x16dc700 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:19.303491560 13201      0x16dc700 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Exiting app
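(For what it's worth, the final "Error: gst-stream-error-quark ..." line above is printed by the sample's GStreamer bus callback; a rough sketch of that handler, based on the bus_call helper shipped with deepstream_python_apps, not verbatim:)

```python
# Sketch of the bus callback that produces the "Error: ..." line above
# (based on the bus_call helper in deepstream_python_apps, not verbatim).
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def bus_call(bus, message, loop):
    t = message.type
    if t == Gst.MessageType.EOS:
        sys.stdout.write("End-of-stream\n")
        loop.quit()
    elif t == Gst.MessageType.WARNING:
        err, debug = message.parse_warning()
        sys.stderr.write("Warning: %s: %s\n" % (err, debug))
    elif t == Gst.MessageType.ERROR:
        # This prints e.g. "Error: gst-stream-error-quark: Internal data stream error."
        err, debug = message.parse_error()
        sys.stderr.write("Error: %s: %s\n" % (err, debug))
        loop.quit()
    return True
```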

Please provide complete information as applicable to your setup and commands to reproduce this issue, thanks.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs: include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements: include the module name, i.e. which plugin or sample application, and the function description.)
• The pipeline being used

OK, I use the DeepStream Triton 6.1 docker image you provide, and the RTSP stream tested is an intranet stream:

python3 deepstream_imagedata-multistream.py rtsp://admin:one2021@192.168.1.177/Streaming/Channels/1 rtsp://admin:one2021@192.168.1.177/Streaming/Channels/1 frame

I don’t know whether this problem has occurred on your side. The test file is not modified, and it is the same with deepstream-test1: the same problem also occurs when I test with /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264, and it can be reproduced as follows:

root@PowerEdge-R740:/home/Download/deepstream_python_apps/apps/deepstream-test1# python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264
Creating Pipeline

Creating Source

Creating H264Parser

Creating Decoder

Creating EGLSink

Playing file /opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.h264
Adding elements to Pipeline

Linking elements in the Pipeline

Starting pipeline

0:00:00.254442485 13341      0x4499240 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:01.785931888 13341      0x4499240 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:01.814187818 13341      0x4499240 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_fp16.engine
0:00:01.814894961 13341      0x4499240 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Frame Number=0 Number of Objects=10 Vehicle_count=6 Person_count=4
0:00:01.962961910 13341      0x3502ea0 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:01.962979775 13341      0x3502ea0 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Frame Number=1 Number of Objects=11 Vehicle_count=7 Person_count=4
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
root@PowerEdge-R740:/home/Download/deepstream_python_apps/apps/deepstream-test1#
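If it helps narrow down where the negotiation fails, I can also raise GStreamer's log level before the pipeline is initialised; a small sketch of what I would add to the sample (my assumption on placement, before Gst.init):

```python
# Sketch: raise GStreamer's log verbosity so caps-negotiation failures are
# reported in more detail. Must be set before GStreamer is initialised.
import os
os.environ["GST_DEBUG"] = "3"  # up to FIXME level: errors, warnings, fixmes

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
```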

Please tell us which platform you are running on.
Your issue is related to EGLSink.

The platform I am running on is
nvcr.io/nvidia/deepstream:6.1-triton. I haven’t touched the environment inside the image; maybe you need more details about the image's internal environment? But I don’t map in any files, everything runs on the image's own contents.

Also, is the EGLSink issue about unsetting the display? I saw this mentioned in "How do I turn EGL Display off?", but I don’t think my problem is the same as that one: I already tried unsetting the display and the problem was not solved, and my error message does not contain that keyword.

https://catalog.ngc.nvidia.com/orgs/nvidia/containers/deepstream is my platform.

You are running on dGPU or Jetson device?

No, it’s on a server with no desktop, using an RTX 5000 accelerator card.

sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
change to
sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
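For example, a rough sketch of that change with a headless fallback (my adaptation, not the exact sample code; the DISPLAY check is only a heuristic):

```python
# Sketch: pick the sink based on whether a display is available, so the same
# script runs on a desktop and on a headless server (adaptation, not the
# exact deepstream_test_1.py code).
import os
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

use_display = bool(os.environ.get("DISPLAY"))
sink_name = "nveglglessink" if use_display else "fakesink"
sink = Gst.ElementFactory.make(sink_name, "nvvideo-renderer")
if not sink:
    raise RuntimeError("Unable to create " + sink_name)
if not use_display:
    sink.set_property("sync", False)  # don't let the dummy sink throttle the pipeline
```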

No problem after that change, thanks.
