Error reported after replacing model in deepstream-test3 with yolov5 model

  • deepstream-app version 6.1.0
  • DeepStreamSDK 6.1.0
  • CUDA Driver Version: 11.4
  • CUDA Runtime Version: 11.0
  • TensorRT Version: 8.2
  • cuDNN Version: 8.4
  • libNVWarp360 Version: 2.0.1d3
  1. Problem: Error reported after replacing the model in deepstream-test3 with a yolov5 model.
  2. Test: No error when there are one or two video input sources; once there are three or more video input sources, it starts to report an error.
  3. Error detail:
0   INPUT  kFLOAT data            3x640x640       
1   OUTPUT kINT32 num_detections  0               
2   OUTPUT kFLOAT nmsed_boxes     300x4           
3   OUTPUT kFLOAT nmsed_scores    300             
4   OUTPUT kFLOAT nmsed_classes   300             

0:00:03.512921561 153578 0x55e237fe84a0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/fast-cnn-train/CLionProjects/test3/cmake-build-debug/model_b2_gpu0_fp32.engine
0:00:03.542082550 153578 0x55e237fe84a0 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1832> [UID = 1]: Backend has maxBatchSize 2 whereas 3 has been requested
0:00:03.542375572 153578 0x55e237fe84a0 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2009> [UID = 1]: deserialized backend context :/home/fast-cnn-train/CLionProjects/test3/cmake-build-debug/model_b2_gpu0_fp32.engine failed to match config params, trying rebuild
0:00:03.566275458 153578 0x55e237fe84a0 INFO                 nvinfer gstnvinfer.cpp:646:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1914> [UID = 1]: Trying to create engine from model files
YOLO config file or weights file is not specified

ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:723 Failed to create network using custom network creation function
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:789 Failed to get cuda engine from custom library API
0:00:04.072920225 153578 0x55e237fe84a0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1934> [UID = 1]: build engine file failed
0:00:04.092527932 153578 0x55e237fe84a0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 1]: build backend context failed
0:00:04.092566347 153578 0x55e237fe84a0 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 1]: generate backend failed, check config file settings
0:00:04.092609720 153578 0x55e237fe84a0 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:04.092623280 153578 0x55e237fe84a0 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<primary-nvinference-engine> error: Config file path: config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
ERROR from element primary-nvinference-engine: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:dstest3-pipeline/GstNvInfer:primary-nvinference-engine:
Config file path: config_infer_primary.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
Returned, stopping playback
Deleting pipeline

Process finished with exit code 0

I solved the problem by regenerating the engine through nvdsinfer_custom_impl_Yolo.so with the cfg and wts files, but I don't know why that fixed it.
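For anyone hitting the same thing: the "YOLO config file or weights file is not specified" line in the log is printed by the custom library, not by nvinfer itself. When the cached engine fails the batch check, nvinfer calls the function named by engine-create-func-name, which validates the cfg/wts paths before building. Below is a minimal sketch of that entry point, assuming the custom library follows the common open-source DeepStream-Yolo pattern; the field and function names come from that pattern and from the DeepStream nvdsinfer_custom_impl.h interface, not from this exact library:

#include <iostream>
#include <string>
#include "nvdsinfer_custom_impl.h"  // DeepStream SDK header declaring the interface

// Engine-create entry point that gst-nvinfer resolves via
// engine-create-func-name=NvDsInferYoloCudaEngineGet.
extern "C" bool NvDsInferYoloCudaEngineGet(
    nvinfer1::IBuilder* const builder,
    nvinfer1::IBuilderConfig* const builderConfig,
    const NvDsInferContextInitParams* const initParams,
    nvinfer1::DataType dataType,
    nvinfer1::ICudaEngine*& cudaEngine)
{
    // gst-nvinfer copies custom-network-config= and model-file= from the
    // config file into these fields. If either is empty (e.g. the config
    // only pointed at a prebuilt engine), the rebuild aborts with exactly
    // the message seen in the log above.
    std::string cfgPath = initParams->customNetworkConfigFilePath;
    std::string wtsPath = initParams->modelFilePath;
    if (cfgPath.empty() || wtsPath.empty()) {
        std::cerr << "YOLO config file or weights file is not specified"
                  << std::endl;
        return false;
    }

    // Real implementations parse the cfg, load the wts weights, build the
    // network with `builder`, and assign the result to `cudaEngine`.
    // That part is omitted; this sketch only illustrates the failing check.
    return false;
}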

Could you share your config files?

config_infer_primary.txt:

[property]
gpu-id=0
# 1/255 as float32: scales 8-bit pixel values into [0,1]
net-scale-factor=0.0039215697906911373
# 0 = RGB input
model-color-format=0
# Darknet-style cfg/wts pair used to (re)build the engine
custom-network-config=yolov5s.cfg
model-file=yolov5s.wts
# Engine to load; if it does not match the settings below, nvinfer rebuilds it
model-engine-file=model_b4_gpu0_fp16.engine
#int8-calib-file=calib.table
labelfile-path=labels.txt
# Must match the number of input sources (four streams here)
batch-size=4
# 2 = FP16 precision
network-mode=2
num-detected-classes=80
interval=0
gie-unique-id=1
# 1 = primary (full-frame) inference
process-mode=1
# 0 = detector
network-type=0
# 4 = no clustering; boxes come from the model/custom parser as-is
cluster-mode=4
maintain-aspect-ratio=0
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

[class-attrs-all]
pre-cluster-threshold=0

This is a four-way video input, so I regenerated the engine by setting batch-size=4 in the configuration file and loading the cfg and wts files into the project.
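As the log shows ("failed to match config params, trying rebuild"), gst-nvinfer only rebuilds when the serialized engine no longer matches the config, and it writes the result with the auto-generated model_b<batch>_gpu<id>_<precision>.engine naming visible above. If a stale engine keeps getting picked up, deleting the generated files before launching forces a clean rebuild from cfg/wts. A small hedged helper for that; the file-name pattern is inferred from the names in this thread:

// remove_stale_engines.cpp -- delete auto-generated nvinfer engine files so
// the plugin rebuilds them from cfg/wts on the next run. Compile with C++17.
#include <filesystem>
#include <iostream>
#include <regex>

int main(int argc, char** argv) {
    namespace fs = std::filesystem;
    const fs::path dir = (argc > 1) ? argv[1] : ".";
    // Pattern assumed from "model_b2_gpu0_fp32.engine" etc. seen in the log.
    const std::regex pattern{R"(model_b\d+_gpu\d+_(fp32|fp16|int8)\.engine)"};

    for (const auto& entry : fs::directory_iterator(dir)) {
        if (entry.is_regular_file() &&
            std::regex_match(entry.path().filename().string(), pattern)) {
            std::cout << "removing " << entry.path() << '\n';
            fs::remove(entry.path());
        }
    }
    return 0;
}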

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Your own model may not support a dynamic batch size. We suggest setting batch-size to the number of stream sources.
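To verify what a serialized engine actually supports, you can deserialize it and ask TensorRT directly. A minimal sketch against the TensorRT 8.2 C++ API (the engine path is just an example):

// inspect_engine.cpp -- print the batch capabilities of a serialized engine.
// Build against TensorRT 8.2 and link with -lnvinfer.
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cerr << msg << '\n';
    }
};

int main(int argc, char** argv) {
    const char* path = (argc > 1) ? argv[1] : "model_b2_gpu0_fp32.engine";
    std::ifstream file(path, std::ios::binary);
    if (!file) { std::cerr << "cannot open " << path << '\n'; return 1; }
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    Logger logger;
    std::unique_ptr<nvinfer1::IRuntime> runtime{
        nvinfer1::createInferRuntime(logger)};
    std::unique_ptr<nvinfer1::ICudaEngine> engine{
        runtime->deserializeCudaEngine(blob.data(), blob.size())};
    if (!engine) { std::cerr << "deserialization failed\n"; return 1; }

    // An implicit-batch engine is fixed at build time (maxBatchSize); an
    // explicit-batch engine may instead carry a dynamic batch dimension.
    std::cout << "implicit batch: " << engine->hasImplicitBatchDimension() << '\n'
              << "maxBatchSize:   " << engine->getMaxBatchSize() << '\n';
    return 0;
}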
