DeepStream Python API with YOLOv4: multichannel RTSP video stream inference fails to run

My setup:
Operating system: Ubuntu 20.04
CPU: Intel(R) Core™ i9-12900KF
GPU: NVIDIA RTX 3090, 24 GB video memory
DeepStream 6.1 installed successfully.
Python bindings for DeepStream installed successfully.

git clone https://github.com/NVIDIA-AI-IOT/yolov4_deepstream
It compiled successfully and is ready to run.
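For reference, building the repo's custom YOLOv4 parser library looks roughly like this (a sketch; CUDA_VER=11.6 is an assumption matching the CUDA toolkit DeepStream 6.1 targets, and the Makefile location follows the repo layout):

cd yolov4_deepstream/deepstream_yolov4
CUDA_VER=11.6 make -C nvdsinfer_custom_impl_Yolo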
Using the Python interface with deepstream-test3, a single RTSP stream runs fine, but two RTSP streams fail with an error.
In deepstream-test3.py (in the deepstream-test3 folder) I modified the pgie config path:
pgie.set_property('config-file-path', "config_infer_primary_yoloV4.txt")
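For context, the change sits where the sample creates its nvinfer element. A minimal sketch of the relevant lines (the element creation is from the stock sample; only the config path differs, and the rest of the pipeline setup is omitted):

import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Create the primary inference element, as in the stock sample.
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
if not pgie:
    sys.stderr.write("Unable to create pgie\n")
    sys.exit(1)

# Changed line: point nvinfer at the YOLOv4 config instead of the
# sample's default dstest3_pgie_config.txt.
pgie.set_property("config-file-path", "config_infer_primary_yoloV4.txt")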

python3 deepstream-test3.py -i rtsp://admin:CDWAPM@192.168.1.9/h264/ch1/main/av_stream -s
runs without problems.

but
python3 deepstream-test3.py -i rtsp://admin:CDWAPM@192.168.1.9/h264/ch1/main/av_stream rtsp://admin:FPTKSH@192.168.1.98/h264/ch1/main/av_stream -s
fails with the following output:
0:00:09.288897521 97120 0x29c0900 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1832> [UID = 1]: Backend has maxBatchSize 1 whereas 2 has been requested
0:00:09.289061931 97120 0x29c0900 WARN nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2009> [UID = 1]: deserialized backend context :/home/ai-box/deepstream/nvidia/deepstream/deepstream-6.1/sources/deepstream_yolov4/yolov4.engine failed to match config params, trying rebuild
0:00:09.297623267 97120 0x29c0900 INFO nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:860 failed to build network since there is no model file matched.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:799 failed to build network.
0:00:09.801857495 97120 0x29c0900 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:09.851086491 97120 0x29c0900 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:09.851121902 97120 0x29c0900 ERROR nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:09.851353192 97120 0x29c0900 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:09.851358835 97120 0x29c0900 WARN nvinfer gstnvinfer.cpp:846:gst_nvinfer_start: error: Config file path: config_infer_primary_yoloV4.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

**PERF: {'stream0': 0.0, 'stream1': 0.0}

Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
Config file path: config_infer_primary_yoloV4.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app

Hi @zhidengkui1981, does your model support a dynamic batch size? If you have 2 sources, you should set the batch size to 2, and your model must support batch-size-2 input.
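For reference, this is roughly how the batch size is aligned with the source count in the script (a sketch; streammux, pgie, and number_sources are assumed to exist as in the stock deepstream-test3 sample, which contains a similar override):

# Assumes streammux (nvstreammux) and pgie (nvinfer) were created earlier,
# and number_sources is the count of -i URIs (2 here).
streammux.set_property("batch-size", number_sources)

# Override the config file's batch-size when it doesn't match the number
# of sources, as the stock sample does.
pgie_batch_size = pgie.get_property("batch-size")
if pgie_batch_size != number_sources:
    sys.stderr.write("WARNING: Overriding infer-config batch-size %d with "
                     "number of sources %d\n"
                     % (pgie_batch_size, number_sources))
    pgie.set_property("batch-size", number_sources)

Note that this only changes what nvinfer requests; the serialized yolov4.engine must itself support a batch of 2, which is what the "Backend has maxBatchSize 1 whereas 2 has been requested" warning is pointing at.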

Thank you very much for your reply.
I modified config_infer_primary_yoloV4.txt (the file passed via pgie.set_property('config-file-path', "config_infer_primary_yoloV4.txt")), setting:
batch-size=2
It doesn't work; it's still the same problem.
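For reference, this change goes in the [property] section of the nvinfer config. A sketch of the relevant keys (the engine path and custom-lib-path shown here are typical of the YOLOv4 sample config, not copied from this exact file):

[property]
gpu-id=0
batch-size=2
model-engine-file=yolov4.engine
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so

Note that if the config names only a model-engine-file and no buildable model source (onnx-file, model-file, etc.), nvinfer cannot rebuild the engine for a new batch size, which matches the "failed to build network since there is no model file matched" error in the log above.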

Did you modify the source code or just the config file? If you didn't change the source code, could you attach your model and config file? Thanks.

Thank you very much. I have solved the problem: the root cause was the batch-size setting used when converting the YOLOv4 weights file to a TensorRT engine.
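For anyone who lands on this thread: the serialized engine must be built for the batch size you run with. With the yolov4_deepstream flow, that means re-exporting the ONNX with batch size 2 and rebuilding the engine, roughly like this (the script name, argument order, and output file name follow the pytorch-YOLOv4 README and are assumptions, not verified against this exact setup):

# Re-export the darknet weights to ONNX with a static batch size of 2:
python3 demo_darknet2onnx.py yolov4.cfg yolov4.weights sample.jpg 2
# Build a TensorRT engine from the batch-2 ONNX:
trtexec --onnx=yolov4_2_3_608_608_static.onnx --fp16 --saveEngine=yolov4.engine

With the rebuilt engine in place (and batch-size=2 in the nvinfer config), the "Backend has maxBatchSize 1 whereas 2 has been requested" warning no longer applies.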
