Deepstream-imagedata-multistream: only a single stream working

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application, and the function description.)

DeepStream version: 6.1.1 Docker container

I am using a custom YOLO-NAS model for inference.
I am testing the deepstream-imagedata-multistream app.
With my custom model, a single stream works fine.
But if I add a second stream, the app gets stuck.
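For reference, the sample is launched with the stream URIs followed by the folder to save frames in. The exact command was not given, but based on the sample's documented usage and the URIs and folder in the log below, it would have been something like:

python3 deepstream_imagedata-multistream.py \
    file:///videos/ppe.h264 file:///videos/ppe.h264 frames

Logs are as follows: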

Frames will be saved in frames
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
source-bin-00
Creating source_bin 1

Creating source bin
source-bin-01
Creating Pgie

Creating nvvidconv1

Creating filter1

Creating tiler

Creating nvvidconv

Creating nvosd

Creating EGLSink

WARNING: Overriding infer-config batch-size 1 with number of sources 2

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing…
1 : file:///videos/ppe.h264
2 : file:///videos/ppe.h264
Starting pipeline

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.348486284 259 0x2dc5590 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/models/yolo_nas_l.trt
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT classes 8400x1

0:00:02.373039611 259 0x2dc5590 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1841> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:02.373131807 259 0x2dc5590 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2018> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/models/yolo_nas_l.trt failed to match config params, trying rebuild
0:00:02.385029936 259 0x2dc5590 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

It gets stuck at building the TensorRT engine.
But the same code with the default config file works fine, even with multiple streams.
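Reading the warnings above, what appears to happen is: the serialized engine yolo_nas_l.trt was built with maxBatchSize 1, but with two sources nvinfer requests batch 2, so it discards the engine and tries to rebuild one from the ONNX file. Building an engine for a model this size can take many minutes and may look like a hang. Assuming the ONNX had been exported with a dynamic batch axis (the original one was not, per the resolution below), one could also pre-build a batch-2 engine with trtexec; the input tensor name "input" comes from the layer info above:

trtexec --onnx=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/models/yolo_nas_l.onnx \
        --saveEngine=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/models/yolo_nas_l.trt \
        --minShapes=input:1x3x640x640 \
        --optShapes=input:2x3x640x640 \
        --maxShapes=input:2x3x640x640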

My config file is as follows:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/models/yolo_nas_l.onnx
model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/models/yolo_nas_l.trt
#int8-calib-file=calib.table
#labelfile-path=labels.txt
batch-size=1
network-mode=0
num-detected-classes=80
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=0
#workspace-size=2000
parse-bbox-func-name=NvDsInferParseYoloE
#parse-bbox-func-name=NvDsInferParseYoloECuda
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.1/sources/DeepStream-Yolo/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
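Note the mismatch flagged in the log: the config sets batch-size=1 under [property], but nvinfer overrides it to 2 because streammux batches both sources ("Overriding infer-config batch-size 1 with number of sources 2"), so the engine and the ONNX it is rebuilt from must be able to serve batch 2. Assuming the model supports it, the matching config change is just:

[property]
# everything else unchanged; the ONNX/engine must also support batch 2
batch-size=2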

My batch size was 1. I re-exported the ONNX with a batch size of 2, and it works now.
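For anyone hitting the same thing, here is a minimal sketch of such a re-export with plain torch.onnx.export. load_my_yolonas() is a hypothetical placeholder for however the custom checkpoint is loaded (e.g. via super-gradients); the input/output names mirror the layer info in the log:

import torch

model = load_my_yolonas()  # hypothetical helper: returns the trained YOLO-NAS torch module
model.eval()

# Batch of 2, so the engine nvinfer builds can serve both streams.
dummy = torch.randn(2, 3, 640, 640)

torch.onnx.export(
    model,
    dummy,
    "yolo_nas_l.onnx",
    input_names=["input"],
    output_names=["boxes", "scores", "classes"],
    opset_version=12,
)

After re-exporting, delete the stale yolo_nas_l.trt (or point model-engine-file elsewhere) so nvinfer rebuilds the engine, and set batch-size=2 in the config to match.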
