• Hardware Platform (Jetson / GPU) - Jetson Orin Nano Developer Kit
• DeepStream Version - Docker Container - deepstream:7.0-samples-multiarch
• JetPack Version - 6.0
• TensorRT Version - 8.6.2.3
• NVIDIA GPU Driver Version (valid for GPU only) -
• Issue Type (questions, new requirements, bugs) - Question / Bug
• How to reproduce the issue?
Ran the docker container using:
sudo docker run --runtime=nvidia -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-7.0 nvcr.io/nvidia/deepstream:7.0-samples-multiarch
Cloned the deepstream_python_apps repo:
cd /opt/nvidia/deepstream/deepstream-7.0/sources
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
Built the Python bindings following the instructions in the bindings' README file:
cd /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps/bindings
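For completeness, the build steps in the README are roughly the following (exact commands and the wheel name can differ between bindings versions, so treat this as a sketch):

```shell
cd /opt/nvidia/deepstream/deepstream-7.0/sources/deepstream_python_apps
# Pull in the third-party sources the bindings build depends on
git submodule update --init
cd bindings
mkdir build && cd build
cmake ..
make -j$(nproc)
# Install the generated pyds wheel (name varies by bindings version)
pip3 install ./pyds-*.whl
```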
For my application, the sample Python app deepstream-imagedata-multistream is the best fit. Since I want to detect people, I used the pretrained TAO model PeopleNet.
cd sources/deepstream_python_apps/apps/deepstream-imagedata-multistream
Created the model engine file from the TAO pretrained model (PeopleNet) using trtexec:
cd /usr/src/tensorrt/bin
trtexec --onnx=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.onnx --saveEngine=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
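For comparison, the logs further down show the resulting engine has a max batch size of 1. A multi-batch engine could presumably be built by passing dynamic shape ranges to trtexec; a sketch, assuming the ONNX export has a dynamic batch dimension on the input tensor `input_1:0` (name and 3x544x960 dims taken from the engine info in the logs):

```shell
cd /usr/src/tensorrt/bin
./trtexec \
  --onnx=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.onnx \
  --minShapes=input_1:0:1x3x544x960 \
  --optShapes=input_1:0:2x3x544x960 \
  --maxShapes=input_1:0:4x3x544x960 \
  --saveEngine=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
```

The batch values (1/2/4) here are illustrative; the max shape would need to cover the largest number of streams fed to nvstreammux.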
Here’s my dstest_imagedata_config.txt file:
# Following properties are mandatory when engine files are not specified:
# int8-calib-file(Only in INT8)
# Caffemodel mandatory properties: model-file, proto-file, output-blob-names
# UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
# ONNX: onnx-file
#
# Mandatory properties for detectors:
# num-detected-classes
#
# Optional properties for detectors:
# cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
# custom-lib-path,
# parse-bbox-func-name
#
# Mandatory properties for classifiers:
# classifier-threshold, is-classifier
#
# Optional properties for classifiers:
# classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
# operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
# input-object-min-width, input-object-min-height, input-object-max-width,
# input-object-max-height
#
# Following properties are always recommended:
# batch-size(Default=1)
#
# Other optional properties:
# net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
# model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
# mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
# custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.
[property]
gpu-id=0
net-scale-factor=0.00392156862745098
offsets=0.0;0.0;0.0
maintain-aspect-ratio=0
tlt-model-key=tlt_encode
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
labelfile-path=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/labels.txt
# int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=30
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
network-type=0
num-detected-classes=3
interval=0
gie-unique-id=1
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
#scaling-filter=0
#scaling-compute-hw=0
cluster-mode=2
infer-dims=3;544;960
output-tensor-meta=0
[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1
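As a side note on the config above: since the model is an ONNX file, I suspect nvinfer expects the ONNX-specific properties rather than the TLT/UFF ones I used. A hedged sketch of that variant (property names taken from the config template comments above; the `:0` suffix on the output names matches the engine bindings printed in the logs):

```
onnx-file=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
# Explicit-batch ONNX engines do not use force-implicit-batch-dim or the uff-* keys
batch-size=2
# Engine bindings carry a ":0" suffix (see the engine info in the logs)
output-blob-names=output_cov/Sigmoid:0;output_bbox/BiasAdd:0
```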
I modified the Python file to use the 3 labels from PeopleNet; no other changes were made to deepstream_imagedata-multistream.py.
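For reference, the label change amounts to something like the following (names are a hypothetical sketch of my edit; PeopleNet's three classes are person, bag, and face):

```python
# Hypothetical sketch of the class-id edit in mod_deepstream_imagedata-multistream.py.
# PeopleNet detects 3 classes, replacing the sample's 4 (vehicle/bicycle/person/roadsign).
PGIE_CLASS_ID_PERSON = 0
PGIE_CLASS_ID_BAG = 1
PGIE_CLASS_ID_FACE = 2
pgie_classes_str = ["Person", "Bag", "Face"]

# Per-frame counters keyed by class id, reset each frame as in the sample's probe
obj_counter = {
    PGIE_CLASS_ID_PERSON: 0,
    PGIE_CLASS_ID_BAG: 0,
    PGIE_CLASS_ID_FACE: 0,
}
```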
Now, when I run the application it works well for a single stream, but when I add more than one stream, it crashes:
For a single stream
python3 mod_deepstream_imagedata-multistream.py rtsp_link1 frames
Frames will be saved in frames
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Is it Integrated GPU? : 1
Creating nv3dsink
Atleast one of the sources is live
WARNING: Overriding infer-config batch-size 30 with number of sources 1
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
1 : rtsp_link1
Starting pipeline
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:06.844761407 151 0xaaab4d54e6a0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 3x544x960
1 OUTPUT kFLOAT output_cov/Sigmoid:0 3x34x60
2 OUTPUT kFLOAT output_bbox/BiasAdd:0 12x34x60
ERROR: [TRT]: 3: Cannot find binding of given name: output_cov/Sigmoid
0:00:07.242181049 151 0xaaab4d54e6a0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2062> [UID = 1]: Could not find output layer 'output_cov/Sigmoid' in engine
ERROR: [TRT]: 3: Cannot find binding of given name: output_bbox/BiasAdd
0:00:07.242240026 151 0xaaab4d54e6a0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2062> [UID = 1]: Could not find output layer 'output_bbox/BiasAdd' in engine
0:00:07.242256379 151 0xaaab4d54e6a0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2198> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
0:00:07.260719379 151 0xaaab4d54e6a0 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 1]: Load new model:dstest_imagedata_config.txt sucessfully
Decodebin child added: source
**PERF: {'stream0': 0.0}
Decodebin child added: decodebin0
Decodebin child added: rtppcmudepay0
Decodebin child added: mulawdec0
In cb_newpad
Decodebin child added: decodebin1
Decodebin child added: rtph265depay0
**PERF: {'stream0': 0.0}
Decodebin child added: h265parse0
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 279
NvMMLiteBlockCreate : Block : BlockType = 279
In cb_newpad
Frame Number= 0 Number of Objects= 5 Person_count= 3 Face_count= 2
**PERF: {'stream0': 0.0}
Frame Number= 1 Number of Objects= 5 Person_count= 3 Face_count= 2
Frame Number= 2 Number of Objects= 5 Person_count= 3 Face_count= 2
Frame Number= 3 Number of Objects= 5 Person_count= 3 Face_count= 2
…
For more than one stream:
python3 mod_deepstream_imagedata-multistream.py rtsp_link1 rtsp_link2 frames
Frames will be saved in frames
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating source_bin 1
Creating source bin
source-bin-01
Creating Pgie
Creating nvvidconv1
Creating filter1
Creating tiler
Creating nvvidconv
Creating nvosd
Is it Integrated GPU? : 1
Creating nv3dsink
Atleast one of the sources is live
WARNING: Overriding infer-config batch-size 30 with number of sources 2
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
1 : rtsp://10.226.52.222:554/h264/ch1/sub/av_stream
2 : rtsp://10.226.52.226:554/h264/ch1/sub/av_stream
Starting pipeline
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:06.917391091 206 0xaaab216c1c70 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2095> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 3x544x960
1 OUTPUT kFLOAT output_cov/Sigmoid:0 3x34x60
2 OUTPUT kFLOAT output_bbox/BiasAdd:0 12x34x60
0:00:07.311114935 206 0xaaab216c1c70 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2027> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:07.311192729 206 0xaaab216c1c70 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2204> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-7.0/samples/models/tao_pretrained_models/peopleNet/resnet34_peoplenet_int8.engine failed to match config params, trying rebuild
0:00:07.325072343 206 0xaaab216c1c70 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2109> [UID = 1]: Trying to create engine from model files
WARNING: INT8 calibration file not specified/accessible. INT8 calibration can be done through setDynamicRange API in ‘NvDsInferCreateNetwork’ implementation
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
ERROR: [TRT]: UffParser: Could not read buffer.
parseModel: Failed to parse UFF model
ERROR: Failed to build network, error in model parsing.
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:14.360852900 206 0xaaab216c1c70 ERROR nvinfer gstnvinfer.cpp:676:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2129> [UID = 1]: build engine file failed
Segmentation fault (core dumped)
Please let me know how I can fix this so that multiple streams run simultaneously.