Python samples fail when running on sample inputs

• Hardware Platform (GPU): NVIDIA V100
• DeepStream Version: 5.0
• TensorRT Version: 7.0.0-1+cuda10.2
• NVIDIA Driver Version: 440.64

I am trying to run deepstream_test_3.py and deepstream_imagedata-multistream.py inside the official Docker image, but I get the following error:

Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(1946): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)

Full output from deepstream_test_3.py (deepstream_imagedata-multistream.py is similar):

# python3 deepstream_test_3.py file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4
Creating Pipeline 
 
Creating streamux 
 
Creating source_bin  0  
 
Creating source bin
source-bin-00
Creating Pgie 
 
Creating tiler 
 
Creating nvvidconv 
 
Creating nvosd 
 
Creating EGLSink 

Warn: 'threshold' parameter has been deprecated. Use 'pre-cluster-threshold' instead.
Adding elements to Pipeline 

Linking elements in the Pipeline 

Now playing...
1 :  file:///opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_1080p_h264.mp4
Starting pipeline 

0:00:17.886642184  6470      0x1f808c0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Detected 1 inputs and 2 output network tensors.
0:03:00.120411592  6470      0x1f808c0 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine successfully
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [FullDims Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       min: 1x3x368x640     opt: 1x3x368x640     Max: 1x3x368x640     
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        min: 0               opt: 0               Max: 0               
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         min: 0               opt: 0               Max: 0               

0:03:01.240735837  6470      0x1f808c0 INFO                 nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:dstest3_pgie_config.txt sucessfully
Decodebin child added: source 

Decodebin child added: decodebin0 

Decodebin child added: qtdemux0 

Decodebin child added: multiqueue0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: aacparse0 

Decodebin child added: avdec_aac0 

Decodebin child added: nvv4l2decoder0 

In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f087e7beb28 (GstCapsFeatures at 0x7f076002dc20)>
In cb_newpad

gstname= audio/x-raw
Frame Number= 0 Number of Objects= 6 Vehicle_count= 4 Person_count= 2
0:03:03.598808665  6470      0x175c190 WARN                 nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:03:03.598828499  6470      0x175c190 WARN                 nvinfer gstnvinfer.cpp:1946:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(1946): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number= 1 Number of Objects= 5 Vehicle_count= 3 Person_count= 2
Exiting app

Frame Number= 2 Number of Objects= 5 Vehicle_count= 3 Person_count= 2
Frame Number= 3 Number of Objects= 6 Vehicle_count= 4 Person_count= 2

Hi,
The V100 is a compute-only card with no display output, which is why the EGL sink fails caps negotiation. You could change

sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")

to

sink = Gst.ElementFactory.make("fakesink", "nvvideo-renderer")
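If you want one script that works both on headless compute cards and on machines with a display, you could pick the sink name at runtime. This is just a sketch (the helper name `choose_sink_name` and the `DISPLAY`-based heuristic are my own, not part of the DeepStream samples):

```python
import os

def choose_sink_name(display_available=None):
    """Pick a GStreamer video sink name for the pipeline.

    On headless compute GPUs (e.g. V100) there is no display, so
    nveglglessink cannot negotiate and we fall back to fakesink.
    By default we guess "headless" from the absence of $DISPLAY.
    """
    if display_available is None:
        display_available = bool(os.environ.get("DISPLAY"))
    return "nveglglessink" if display_available else "fakesink"

# Usage in the sample script would look like:
#   sink = Gst.ElementFactory.make(choose_sink_name(), "nvvideo-renderer")
```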

Hi, thanks @Amycao, it's working now!