deepstream-imagedata-multistream: only a single stream working

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Deepstream version - 6.1.1 docker container

I am using a custom YOLO-NAS model for inference.
I am testing the deepstream-imagedata-multistream app.
With my custom model, a single stream works fine.
But if I add a second stream, the code gets stuck. The logs are as follows:

Frames will be saved in frames
Creating Pipeline

Creating streamux

Creating source_bin 0

Creating source bin
Creating source_bin 1

Creating source bin
Creating Pgie

Creating nvvidconv1

Creating filter1

Creating tiler

Creating nvvidconv

Creating nvosd

Creating EGLSink

WARNING: Overriding infer-config batch-size 1 with number of sources 2

Adding elements to Pipeline

Linking elements in the Pipeline

Now playing…
1 : file:///videos/ppe.h264
2 : file:///videos/ppe.h264
Starting pipeline

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:02.348486284 259 0x2dc5590 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/models/yolo_nas_l.trt
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x640x640
1 OUTPUT kFLOAT boxes 8400x4
2 OUTPUT kFLOAT scores 8400x1
3 OUTPUT kFLOAT classes 8400x1

0:00:02.373039611 259 0x2dc5590 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1841> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:02.373131807 259 0x2dc5590 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2018> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/models/yolo_nas_l.trt failed to match config params, trying rebuild
0:00:02.385029936 259 0x2dc5590 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.

Building the TensorRT Engine

It gets stuck at building the TensorRT engine.
But the same code with the default config file works fine, even with multiple streams.
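The warnings above point at the cause: the serialized engine was built with maxBatchSize 1, while nvinfer overrides batch-size to 2 (the number of sources), so the deserialized engine fails to match the config and DeepStream attempts a rebuild. A minimal sketch of the relevant nvinfer config section, assuming the standard gst-nvinfer property keys (the file paths here are placeholders, not the actual paths from this setup):

```ini
[property]
# With 2 sources, nvinfer overrides batch-size to 2; the engine
# must have been built to support that batch size.
batch-size=2
# Point onnx-file at a model exported with batch size 2 (or a dynamic
# batch axis) so the rebuild can succeed. Once built, the engine at
# model-engine-file is reused on subsequent runs.
onnx-file=yolo_nas_l_b2.onnx
model-engine-file=yolo_nas_l_b2.onnx_b2_gpu0_fp16.engine
```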

My config file is as follows:


My batch size was 1. I re-exported the model to ONNX with a batch size of 2, and it works now.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.