NVDS Python example not working

• Hardware Platform (GPU) GeForce RTX 3090
• DeepStream Version 6.1.0
• TensorRT Version 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only) 510.73.05
• Issue Type( questions, new requirements, bugs) question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing) Stated below

Hello,
I’ve come across an issue where I can’t seem to get buffer data through a probe. After hitting this, I figured the best thing to do was to run one of the examples from the DeepStream Python Apps repo.
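For context, here is a minimal sketch of the kind of probe I am trying to attach (illustrative only, not my exact code; I attach it on the OSD sink pad the same way the sample apps do):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Retrieve the DeepStream batch metadata attached to the Gst buffer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print("Frame", frame_meta.frame_num, "objects", frame_meta.num_obj_meta)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attached on the OSD element's sink pad, e.g.:
# osdsinkpad = nvosd.get_static_pad("sink")
# osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, osd_sink_pad_buffer_probe, 0)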

Running the command as instructed in the tutorial gave me this:

wsadmin@AIML1001:/home/mher/projects/PromethephzPeople/deepstream-analytics/deepstream_python_apps/apps/deepstream-imagedata-multistream$ sudo python deepstream_imagedata-multistream.py file:///home/mher/projects/PromethephzPeople/samples/cam14_1.h264 /home/mher/projects/PromethephzPeople/samples/capped_frames/
Frames will be saved in  /home/mher/projects/PromethephzPeople/samples/capped_frames/
Creating Pipeline

Creating streamux

Creating source_bin  0

Creating source bin
source-bin-00
Creating Pgie

Creating nvvidconv1

Creating filter1

Creating tiler

Creating nvvidconv

Creating nvosd

Creating EGLSink

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing...
1 :  file:///home/mher/projects/PromethephzPeople/samples/cam14_1.h264
Starting pipeline

0:00:01.107233183 68034      0x2571270 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640
1   OUTPUT kFLOAT conv2d_bbox     16x23x40
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:01.164892984 68034      0x2571270 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2003> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:01.166273053 68034      0x2571270 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/home/mher/projects/PromethephzPeople/deepstream-analytics/deepstream_python_apps/apps/deepstream-imagedata-multistream/dstest_imagedata_config.txt sucessfully
Decodebin child added: source

Decodebin child added: decodebin0

Decodebin child added: h264parse0

Decodebin child added: capsfilter0

Decodebin child added: nvv4l2decoder0

In cb_newpad

Frame Number= 0 Number of Objects= 4 Vehicle_count= 0 Person_count= 4
0:00:01.308843623 68034      0x1c72640 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:01.308853863 68034      0x1c72640 WARN                 nvinfer gstnvinfer.cpp:2299:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason not-negotiated (-4)
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2299): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason not-negotiated (-4)
Frame Number= 1 Number of Objects= 4 Vehicle_count= 0 Person_count= 4
Exiting app



I’ve only modified the config to replace the relative model paths with absolute paths:

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2020-2021 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel
proto-file=/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/labels.txt
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.1/samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
## 0=Group Rectangles, 1=DBSCAN, 2=NMS, 3= DBSCAN+NMS Hybrid, 4 = None(No clustering)
cluster-mode=1

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.7
minBoxes=1

#Use the config params below for dbscan clustering mode
[class-attrs-all]
detected-min-w=4
detected-min-h=4
minBoxes=3

## Per class configurations
[class-attrs-0]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.95

[class-attrs-1]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.5

[class-attrs-2]
pre-cluster-threshold=0.1
eps=0.6
dbscan-min-score=0.95

[class-attrs-3]
pre-cluster-threshold=0.05
eps=0.7
dbscan-min-score=0.5

Any idea what is happening and how I can fix it?
I’ve also tried running this with an mp4 file (the one that comes with the DeepStream samples); same issue.
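For reference, the mp4 run looked roughly like this (assuming the stock sample stream under the standard DeepStream samples directory):

sudo python3 deepstream_imagedata-multistream.py file:///opt/nvidia/deepstream/deepstream-6.1/samples/streams/sample_720p.mp4 /home/mher/projects/PromethephzPeople/samples/capped_frames/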

In my experience the “not negotiated” error usually comes from missing videoconvert elements, but since this is a sample app I decided not to change anything yet and to wait for your input.
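Just to show what I mean (a sketch of my hypothesis only, not something I have applied; pipeline, nvosd and sink are the variables from the sample app):

# Sketch: add an extra nvvideoconvert between nvosd and the sink so caps can negotiate
nvvidconv2 = Gst.ElementFactory.make("nvvideoconvert", "convertor2")
if not nvvidconv2:
    sys.stderr.write("Unable to create nvvideoconvert\n")
pipeline.add(nvvidconv2)
# Relink the tail of the pipeline: nvosd -> nvvidconv2 -> sink instead of nvosd -> sink
nvosd.link(nvvidconv2)
nvvidconv2.link(sink)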

You can change the sink from nveglglessink to fakesink. If you want to check the output, make sure you have a monitor connected if possible; alternatively, you can consider modifying the app to save the output to a file.
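A minimal sketch of that change in deepstream_imagedata-multistream.py (only the sink creation needs to change; "sync" is a standard GStreamer base-sink property):

# Instead of: sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
sink = Gst.ElementFactory.make("fakesink", "fakesink")
if not sink:
    sys.stderr.write("Unable to create fakesink\n")
sink.set_property("sync", 0)  # do not sync on the clock, just consume buffers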

