Deserialized backend context model.etlt_b16_gpu0_fp16.engine failed to match config params

I have trained a custom model in NVIDIA TAO and tried to integrate it with the DeepStream Python test2.py app. I edited only the pgie1 section, pointing it at my custom model path and config file, and it shows the following error.

I have gone through a similar issue; the answers say to check the config file. Which config file do I have to check?


0:00:20.849281607   628      0x1c42610 WARN                 nvinfer gstnvinfer.cpp:643:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1856> [UID = 2]: backend can not support dims:224x224x3
0:00:20.849307852   628      0x1c42610 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1964> [UID = 2]: deserialized backend context :/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.etlt_b16_gpu0_fp16.engine failed to match config params
0:00:20.883138286   628      0x1c42610 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2020> [UID = 2]: build backend context failed
0:00:20.883175907   628      0x1c42610 ERROR                nvinfer gstnvinfer.cpp:640:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1257> [UID = 2]: generate backend failed, check config file settings
0:00:20.883201356   628      0x1c42610 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<secondary1-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:20.883220708   628      0x1c42610 WARN                 nvinfer gstnvinfer.cpp:846:gst_nvinfer_start:<secondary1-nvinference-engine> error: Config file path: dstest2_sgie1_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): gstnvinfer.cpp(846): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:secondary1-nvinference-engine:
Config file path: dstest2_sgie1_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

SGIE config file (dstest2_sgie1_config.txt, per the error log)

[property]
gpu-id=0
net-scale-factor=1
#model-engine-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.engine
tlt-encoded-model=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/model.etlt
tlt-model-key=password
uff-input-blob-name=input_1
uff-input-dims=3;224;224;1
int8-calib-file=/opt/nvidia/deepstream/deepstream-6.1/sources/deepstream_python_apps/apps/deepstream-test2-new/deploy/calib.bin
network-input-order=1
#infer-dims=3;224;224
batch-size=16
network-mode=2
num-detected-classes=2
input-object-min-width=10
input-object-min-height=10
model-color-format=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=0
classifier-threshold=0.01
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0
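For anyone comparing the dims in the log with the dims in the config: my reading of the `uff-input-dims=c;h;w;order` convention (the trailing field selects the layout, 0 = NCHW, 1 = NHWC) is that the order flag of 1 above makes nvinfer expect a 224x224x3 input, while the engine was built channel-first. The helper below is only a sketch of that interpretation, not DeepStream's actual parsing code:

```python
def expected_input_shape(uff_input_dims: str):
    """Interpret an nvinfer uff-input-dims=c;h;w;order string.

    The first three fields are channel;height;width; the trailing
    field selects the layout: 0 = NCHW, 1 = NHWC (assumed semantics).
    """
    c, h, w, order = (int(x) for x in uff_input_dims.split(";"))
    return (c, h, w) if order == 0 else (h, w, c)

# The value from the config above: NHWC order turns 3;224;224 into
# 224x224x3, matching the "backend can not support dims:224x224x3" warning.
print(expected_input_shape("3;224;224;1"))  # (224, 224, 3)
# With NCHW order the same fields stay channel-first:
print(expected_input_shape("3;224;224;0"))  # (3, 224, 224)
```

If the engine reports 3x224x224, flipping the order flag (or using `infer-dims`) would make the two sides agree.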

I am sorry to tag you, but I really need your help.
@yuweiw @yingliu

The input dims parameter of your model doesn't match the parameter in your config file. You can check your own model's input dims parameters.

What would be the ideal input dims?
@yuweiw

It’s your trained custom model’s input dims.

I am using a ResNet-18 model with the same dimensions. @yuweiw

model_config {
  # Model Architecture can be chosen from:
  # ['resnet', 'vgg', 'googlenet', 'alexnet']
  arch: "resnet"
  # for resnet --> n_layers can be [10, 18, 50]
  # for vgg --> n_layers can be [16, 19]
  n_layers: 18
  use_batch_norm: True
  use_bias: False
  all_projections: False
  use_pooling: True
  retain_head: True
  resize_interpolation_method: BICUBIC
  # if you want to use the pretrained model,
  # image size should be "3,224,224"
  # otherwise, it can be "3, X, Y", where X,Y >= 16
  input_image_size: "3,224,224"
}
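Since `input_image_size: "3,224,224"` in the TAO spec is channel-first, the nvinfer side should declare the same channel-first layout. A minimal sketch of the dims-related lines, assuming the exported engine is NCHW as TAO classification exports usually are (verify against your own engine before relying on it):

```
[property]
# input_image_size "3,224,224" is channel-first, so declare NCHW (0)
network-input-order=0
uff-input-dims=3;224;224;0
# or, on DeepStream 6.x, the newer key instead of uff-input-dims:
# infer-dims=3;224;224
```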

Closing this topic; please keep the discussion in Classifier_meta_list is none in deepstream_test2.py - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums.