ERROR: failed to build network since there is no model file matched

• Hardware - Orin
• Network Type - Yolo_v4
• TLT Version - tlt-streamanalytics:v3.0-dp-py3

Hello,
As part of upgrading our product from Xavier NX to Orin NX, we are migrating a model engine that was trained on DeepStream 5.x + TensorRT 7.1.3 so that it runs on DeepStream 7.x + TensorRT 10.3.0.

To do that, we convert the model in these steps: .tlt → .hdf5 → .onnx → .engine

The conversion from .tlt to .hdf5 is done with the tlt-streamanalytics:v3.0-dp-py3 Docker image; the model is then exported to an ONNX file and finally converted to an engine on the Jetson with trtexec.
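For reference, the final trtexec step was along these lines (a minimal sketch; the file names are from our setup, and --fp16 stands in for whatever precision flag is actually used):

trtexec --onnx=yolov4_mobilenet_v2_epoch_018_200725.onnx \
        --saveEngine=yolov4_mobilenet_v2_epoch_018_200725.engine \
        --fp16

Note that without --minShapes/--optShapes/--maxShapes the engine is built for the batch size baked into the ONNX, which is presumably why the log below reports maxBatchSize 1 while the pipeline requests batch 4.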

When trying to run the pipeline with the newly converted engine file, I get the error below.
Please advise,
Ziv

Error -
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
WARNING: [TRT]: BatchedNMSPlugin is deprecated since TensorRT 9.0. Use INetworkDefinition::addNMS() to add an INMSLayer OR use EfficientNMS plugin.
0:00:02.671018109 2518 0xaaaae4872a30 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/media/prog/home/mic-710aix/robot/src/detection_pipeline/models/yolov4_mobilenet_v2_epoch_018_200725.engine
Implicit layer support has been deprecated
INFO: [Implicit Engine Info]: layers num: 0

0:00:02.671147660 2518 0xaaaae4872a30 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2026> [UID = 1]: Backend has maxBatchSize 1 whereas 4 has been requested
0:00:02.671175567 2518 0xaaaae4872a30 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2201> [UID = 1]: deserialized backend context :/media/prog/home/mic-710aix/robot/src/detection_pipeline/models/yolov4_mobilenet_v2_epoch_018_200725.engine failed to match config params, trying rebuild
0:00:02.680204881 2518 0xaaaae4872a30 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
ERROR: failed to build network since there is no model file matched.
ERROR: failed to build network.
0:00:05.228882154 2518 0xaaaae4872a30 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2126> [UID = 1]: build engine file failed
0:00:05.547987072 2518 0xaaaae4872a30 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2213> [UID = 1]: build backend context failed
0:00:05.548058119 2518 0xaaaae4872a30 ERROR nvinfer gstnvinfer.cpp:678:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1351> [UID = 1]: generate backend failed, check config file settings
0:00:05.548752076 2518 0xaaaae4872a30 WARN nvinfer gstnvinfer.cpp:914:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:05.548780175 2518 0xaaaae4872a30 WARN nvinfer gstnvinfer.cpp:914:gst_nvinfer_start: error: Config file path: /media/prog/home/mic-710aix/robot/src/detection_pipeline/config/mock_config_infer_yolov4.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

Attached is the config file I used:
mock_config_infer_yolov4.txt (3.3 KB)

You can comment out the engine file, set the ONNX file in the config file, and then retry.
See the latest GitHub config at https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/master/configs/nvinfer/yolov4-tiny_tao/pgie_yolov4_tiny_tao_config.txt#L32.
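For example, in mock_config_infer_yolov4.txt the relevant [property] lines would look roughly like this (a sketch using the paths from your log; adjust batch-size and the other properties to your setup):

[property]
# let nvinfer build the engine from the ONNX model instead of loading the old plan
onnx-file=/media/prog/home/mic-710aix/robot/src/detection_pipeline/models/yolov4_mobilenet_v2_epoch_018_200725.onnx
# model-engine-file=/media/prog/home/mic-710aix/robot/src/detection_pipeline/models/yolov4_mobilenet_v2_epoch_018_200725.engine
batch-size=4

On the first run nvinfer builds the engine from the ONNX file itself (this takes a few minutes) and serializes it, so later runs start faster.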

For an old-version TAO .etlt file, you can configure it directly.
See the old-version GitHub config at https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/blob/release/tao3.0_ds6.2ga/configs/yolov4-tiny_tao/pgie_yolov4_tiny_tao_config.txt#L31-L32.
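In that older style, the two relevant lines look like this (the file name and key below are placeholders, not values from your setup):

tlt-encoded-model=yolov4_mobilenet_v2_epoch_018_200725.etlt
tlt-model-key=<the key used when the model was exported>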