DeepStream 6.1 imagedata-multistream not working

• Hardware Platform (Jetson / GPU) → NVIDIA GeForce GTX 1650
• DeepStream Version → 6.1
• JetPack Version (valid for Jetson only) → NA
• TensorRT Version → TensorRT 8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only) → NVIDIA driver 515
• Issue Type (questions, new requirements, bugs) → bugs
• How to reproduce the issue? (This is for bugs. Include the sample app used, the configuration file contents, the command line, and other details for reproducing.) → see below
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or sample application, and the function description.) → NA

I’ve installed the Python bindings and I’m trying to run the deepstream-imagedata-multistream sample. I have not modified any configuration or code in the Python sample.

How to run:

python3 deepstream_imagedata-multistream.py file:///home/yousef/Data/NYUSA.mp4 file:///home/yousef/Data/NYUSA2.mp4 frames
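
As a quick sanity check that the bindings themselves are fine (assuming they were installed as the pyds wheel from deepstream_python_apps), importing the module should succeed and print its location:

# verify the DeepStream Python bindings import cleanly
python3 -c "import pyds; print(pyds.__file__)"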

output:

Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating source_bin  1  
 
Creating source bin
source-bin-01
Creating Pgie 
 
Creating nvvidconv1 
 
Creating filter1 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating EGLSink 

WARNING: Overriding infer-config batch-size 1  with number of sources  2  

Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
1 :  file:///home/yousef/Data/NYUSA.mp4
2 :  file:///home/yousef/Data/NYUSA2.mp4
Starting pipeline 

libEGL warning: MESA-LOADER: failed to retrieve device information

libEGL warning: DRI2: could not open /dev/dri/card0 (No such file or directory)
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:03.539169665   129      0x45af130 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:03.554188074   129      0x45af130 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1832> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:03.554213395   129      0x45af130 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2009> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed to match config params, trying rebuild
0:00:03.555814813   129      0x45af130 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
0:00:30.555809513   129      0x45af130 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1946> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b2_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:30.574220215   129      0x45af130 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest_imagedata_config.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

Decodebin child added: source 

Decodebin child added: decodebin1 


**PERF:  {'stream0': 0.0, 'stream1': 0.0} 

Decodebin child added: qtdemux0 

Decodebin child added: qtdemux1 

Decodebin child added: multiqueue0 

Decodebin child added: multiqueue1 

Decodebin child added: h264parse0 

Decodebin child added: h264parse1 

Decodebin child added: capsfilter0 

Decodebin child added: capsfilter1 

Decodebin child added: aacparse0 

Decodebin child added: aacparse1 

Decodebin child added: avdec_aac1 

Decodebin child added: avdec_aac0 

Decodebin child added: nvv4l2decoder0 
Decodebin child added: nvv4l2decoder1 


In cb_newpad

In cb_newpad

In cb_newpad

In cb_newpad

cuGraphicsGLRegisterBuffer failed with error(219) gst_eglglessink_cuda_init texture = 1
Frame Number= 0 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 0 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
0:00:30.864581013   129      0x3889b60 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:30.864604802   129      0x3889b60 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Exiting app

Frame Number= 1 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 1 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 2 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 2 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 3 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 3 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 4 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 4 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 5 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 5 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 6 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 6 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 7 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
Frame Number= 7 Number of Objects= 0 Vehicle_count= 0 Person_count= 0

Also, nvidia-smi reports CUDA version 11.7.

How can I overcome this error? Is it due to the driver and CUDA versions? If so, how can I safely uninstall my current versions and upgrade to the required ones?
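
For what it’s worth, I understand nvidia-smi shows the highest CUDA version the driver supports rather than the toolkit that is actually installed. Here is how I can check which toolkits are really present (assuming the default /usr/local prefix):

# toolkit version the build tools will use
nvcc --version
# all toolkits installed side by side
ls -d /usr/local/cuda*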

Please use CUDA 11.6.
Which GPU are you using?

NVIDIA GeForce GTX 1650

Hello, I have exactly the same problem. My GPU is a Quadro T2000, with CUDA 11.7 and driver 515.43.04.

Hello, the problem in my case was that I was not using NVIDIA’s OpenGL libraries. You can verify with:

glxinfo | grep OpenGL

The output should look like this:

OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: Quadro T2000 with Max-Q Design/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 515.43.04
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 4.6.0 NVIDIA 515.43.04
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 515.43.04
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:

If not, do this:

sudo prime-select nvidia
sudo reboot
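
After the reboot you can confirm the switch took effect (assuming Ubuntu’s nvidia-prime tooling): prime-select should report nvidia and the vendor string should come from NVIDIA rather than Mesa.

# should print "nvidia"
prime-select query
# vendor string should be "NVIDIA Corporation"
glxinfo | grep "OpenGL vendor"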

By the way, I downgraded my CUDA to 11.4 just in case.
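
For reference, this is roughly how a toolkit can be selected when several versions are installed side by side under /usr/local (just a sketch, not a full uninstall/reinstall):

# point the default toolkit at the version you want, e.g. 11.4
sudo ln -sfn /usr/local/cuda-11.4 /usr/local/cuda
# make sure the shell picks up that toolkit
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH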


What version of DeepStream are you using?

DeepStream 6.0.1

Worked like a charm, thank you
