Network has dynamic or shape inputs, but no optimization profile has been defined

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4 Preview
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Hello, I have been using the fd_lpd caffemodel from the redaction app for detecting faces/license plates since DeepStream 4.0, but after upgrading to DeepStream 5.0 I am unable to use that caffemodel for inference as the primary GIE. Here is the error:

0:00:00.269172961  3838   0x558db91e30 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:20.940268384  3838   0x558db91e30 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 1]: build engine file failed
0:00:20.964720494  3838   0x558db91e30 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 1]: build backend context failed
0:00:20.964763360  3838   0x558db91e30 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 1]: generate backend failed, check config file settings
0:00:21.204716582  3838   0x558db91e30 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary_gie> error: Failed to create NvDsInferContext instance
0:00:21.204758405  3838   0x558db91e30 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<primary_gie> error: Config file path: /home/sigmind/WC-Tegra-DS-Jessore/.cfg/deepstream-app/pgie_config_fd_lpd.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED

@xhuv_NV

I think this error probably comes from TensorRT 7.1, which is installed alongside DeepStream 5.0.
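
For context: TensorRT 7 parses the model into an explicit-batch network (hence the "setting batch size to 1 ... due to using explicit batch size" warning above), and an explicit-batch network with a dynamic dimension cannot be built into an engine until an optimization profile is attached. Below is a minimal sketch of what that looks like with the TensorRT 7 C++ API; this is not the actual nvinfer code, and the 1/1/2 min/opt/max batch range is only an assumption for illustration:

    #include "NvInfer.h"

    // Sketch only: attach an optimization profile so that an
    // explicit-batch network with a dynamic batch dimension can be
    // built into an engine.
    nvinfer1::ICudaEngine* buildWithProfile(nvinfer1::IBuilder* builder,
                                            nvinfer1::INetworkDefinition* network)
    {
        nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
        nvinfer1::IOptimizationProfile* profile = builder->createOptimizationProfile();

        // "data" and 3x270x480 match the fd_lpd prototxt; the batch
        // sizes below are assumptions.
        profile->setDimensions("data", nvinfer1::OptProfileSelector::kMIN,
                               nvinfer1::Dims4(1, 3, 270, 480));
        profile->setDimensions("data", nvinfer1::OptProfileSelector::kOPT,
                               nvinfer1::Dims4(1, 3, 270, 480));
        profile->setDimensions("data", nvinfer1::OptProfileSelector::kMAX,
                               nvinfer1::Dims4(2, 3, 270, 480));
        config->addOptimizationProfile(profile);

        return builder->buildEngineWithConfig(*network, *config);
    }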
Could you please share your setup with us, including:

  1. caffe model (prototxt file + caffemodel; at minimum the prototxt file)
  2. DeepStream configuration txt files

I have a similar error, but with the sgie.

0:00:01.812365908  1402   0x5594021e30 INFO                 nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 2]: Trying to create engine from model files
Warning, setting batch size to 1. Update the dimension after parsing due to using explicit batch size.
ERROR: [TRT]: Network has dynamic or shape inputs, but no optimization profile has been defined.
ERROR: [TRT]: Network validation failed.
ERROR: Build engine failed from config file
ERROR: failed to build trt engine.
0:00:02.181316206  1402   0x5594021e30 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 2]: build engine file failed
0:00:02.186903954  1402   0x5594021e30 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1697> [UID = 2]: build backend context failed
0:00:02.186990515  1402   0x5594021e30 ERROR                nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger:<secondary1-nvinference-engine> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1024> [UID = 2]: generate backend failed, check config file settings
0:00:02.187100852  1402   0x5594021e30 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<secondary1-nvinference-engine> error: Failed to create NvDsInferContext instance
0:00:02.187174740  1402   0x5594021e30 WARN                 nvinfer gstnvinfer.cpp:781:gst_nvinfer_start:<secondary1-nvinference-engine> error: Config file path: dstest2_sgie1_config_vggface.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element secondary1-nvinference-engine: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(781): gst_nvinfer_start (): /GstPipeline:dstest2-pipeline/GstNvInfer:secondary1-nvinference-engine:
Config file path: dstest2_sgie1_config_vggface.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline
  1. caffe model (prototxt file + caffemodel; at minimum the prototxt file):
    The caffemodels are from GitHub - ox-vgg/vgg_face2:
    resnet50_128.caffemodel and resnet50_128.prototxt

  2. DeepStream configuration txt files:
    dstest2_sgie1_config_vggface.txt

    [property]
    gpu-id=0
    net-scale-factor=1
    model-engine-file=../../../../samples/models/Secondary_VGG/model/resnet50_128_caffe/resnet50_128.caffemodel_b16_gpu0_fp32.engine
    model-file=../../../../samples/models/Secondary_VGG/model/resnet50_128_caffe/resnet50_128.caffemodel
    proto-file=../../../../samples/models/Secondary_VGG/model/resnet50_128_caffe/resnet50_128.prototxt
    batch-size=16
    # 0=FP32 and 1=INT8 mode
    network-mode=0
    input-object-min-width=64
    input-object-min-height=64
    process-mode=2
    model-color-format=1
    gpu-id=0
    gie-unique-id=2
    operate-on-gie-id=1
    operate-on-class-ids=0
    output-blob-names=feat_extract
    classifier-async-mode=0
    # classifier-threshold=0.51
    ## 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
    network-type=100
    # Enable tensor metadata output
    output-tensor-meta=1

@xhuv_NV

Could you please provide your model and configurations that ran successfully with DeepStream 4.0?

Yes, the model is fd_lpd.caffemodel with the following prototxt file:

    name: "FD_LPD_Redactor"
    input: "data"
    input_dim: 1
    input_dim: 3
    input_dim: 270
    input_dim: 480

    # **********************************************************
    # LPD Model resnet10
    # **********************************************************
    layer {
      name: "conv1_branch_1"
      type: "Convolution"
      bottom: "data"
      top: "conv1_branch_1"
      convolution_param {
        num_output: 64
        pad_h: 3
        pad_w: 3
        kernel_h: 7
        kernel_w: 7
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "bn_conv1_branch_1"
      type: "Scale"
      bottom: "conv1_branch_1"
      top: "bn_conv1_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_1/Relu_branch_1"
      type: "ReLU"
      bottom: "bn_conv1_branch_1"
      top: "activation_1/Relu_branch_1"
    }
    layer {
      name: "block_1a_conv_1_branch_1"
      type: "Convolution"
      bottom: "activation_1/Relu_branch_1"
      top: "block_1a_conv_1_branch_1"
      convolution_param {
        num_output: 64
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_1a_conv_shortcut_branch_1"
      type: "Convolution"
      bottom: "activation_1/Relu_branch_1"
      top: "block_1a_conv_shortcut_branch_1"
      convolution_param {
        num_output: 64
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_1a_bn_1_branch_1"
      type: "Scale"
      bottom: "block_1a_conv_1_branch_1"
      top: "block_1a_bn_1_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "block_1a_bn_shortcut_branch_1"
      type: "Scale"
      bottom: "block_1a_conv_shortcut_branch_1"
      top: "block_1a_bn_shortcut_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_2/Relu_branch_1"
      type: "ReLU"
      bottom: "block_1a_bn_1_branch_1"
      top: "activation_2/Relu_branch_1"
    }
    layer {
      name: "block_1a_conv_2_branch_1"
      type: "Convolution"
      bottom: "activation_2/Relu_branch_1"
      top: "block_1a_conv_2_branch_1"
      convolution_param {
        num_output: 64
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_1a_bn_2_branch_1"
      type: "Scale"
      bottom: "block_1a_conv_2_branch_1"
      top: "block_1a_bn_2_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "add_1_branch_1"
      type: "Eltwise"
      bottom: "block_1a_bn_shortcut_branch_1"
      bottom: "block_1a_bn_2_branch_1"
      top: "add_1_branch_1"
      eltwise_param {
        operation: SUM
      }
    }
    layer {
      name: "activation_3/Relu_branch_1"
      type: "ReLU"
      bottom: "add_1_branch_1"
      top: "activation_3/Relu_branch_1"
    }
    layer {
      name: "block_2a_conv_1_branch_1"
      type: "Convolution"
      bottom: "activation_3/Relu_branch_1"
      top: "block_2a_conv_1_branch_1"
      convolution_param {
        num_output: 128
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_2a_conv_shortcut_branch_1"
      type: "Convolution"
      bottom: "activation_3/Relu_branch_1"
      top: "block_2a_conv_shortcut_branch_1"
      convolution_param {
        num_output: 128
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_2a_bn_1_branch_1"
      type: "Scale"
      bottom: "block_2a_conv_1_branch_1"
      top: "block_2a_bn_1_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "block_2a_bn_shortcut_branch_1"
      type: "Scale"
      bottom: "block_2a_conv_shortcut_branch_1"
      top: "block_2a_bn_shortcut_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_4/Relu_branch_1"
      type: "ReLU"
      bottom: "block_2a_bn_1_branch_1"
      top: "activation_4/Relu_branch_1"
    }
    layer {
      name: "block_2a_conv_2_branch_1"
      type: "Convolution"
      bottom: "activation_4/Relu_branch_1"
      top: "block_2a_conv_2_branch_1"
      convolution_param {
        num_output: 128
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_2a_bn_2_branch_1"
      type: "Scale"
      bottom: "block_2a_conv_2_branch_1"
      top: "block_2a_bn_2_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "add_2_branch_1"
      type: "Eltwise"
      bottom: "block_2a_bn_shortcut_branch_1"
      bottom: "block_2a_bn_2_branch_1"
      top: "add_2_branch_1"
      eltwise_param {
        operation: SUM
      }
    }
    layer {
      name: "activation_5/Relu_branch_1"
      type: "ReLU"
      bottom: "add_2_branch_1"
      top: "activation_5/Relu_branch_1"
    }
    layer {
      name: "block_3a_conv_1_branch_1"
      type: "Convolution"
      bottom: "activation_5/Relu_branch_1"
      top: "block_3a_conv_1_branch_1"
      convolution_param {
        num_output: 256
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_3a_conv_shortcut_branch_1"
      type: "Convolution"
      bottom: "activation_5/Relu_branch_1"
      top: "block_3a_conv_shortcut_branch_1"
      convolution_param {
        num_output: 256
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_3a_bn_1_branch_1"
      type: "Scale"
      bottom: "block_3a_conv_1_branch_1"
      top: "block_3a_bn_1_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "block_3a_bn_shortcut_branch_1"
      type: "Scale"
      bottom: "block_3a_conv_shortcut_branch_1"
      top: "block_3a_bn_shortcut_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_6/Relu_branch_1"
      type: "ReLU"
      bottom: "block_3a_bn_1_branch_1"
      top: "activation_6/Relu_branch_1"
    }
    layer {
      name: "block_3a_conv_2_branch_1"
      type: "Convolution"
      bottom: "activation_6/Relu_branch_1"
      top: "block_3a_conv_2_branch_1"
      convolution_param {
        num_output: 256
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_3a_bn_2_branch_1"
      type: "Scale"
      bottom: "block_3a_conv_2_branch_1"
      top: "block_3a_bn_2_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "add_3_branch_1"
      type: "Eltwise"
      bottom: "block_3a_bn_shortcut_branch_1"
      bottom: "block_3a_bn_2_branch_1"
      top: "add_3_branch_1"
      eltwise_param {
        operation: SUM
      }
    }
    layer {
      name: "activation_7/Relu_branch_1"
      type: "ReLU"
      bottom: "add_3_branch_1"
      top: "activation_7/Relu_branch_1"
    }
    layer {
      name: "block_4a_conv_1_branch_1"
      type: "Convolution"
      bottom: "activation_7/Relu_branch_1"
      top: "block_4a_conv_1_branch_1"
      convolution_param {
        num_output: 512
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_4a_conv_shortcut_branch_1"
      type: "Convolution"
      bottom: "activation_7/Relu_branch_1"
      top: "block_4a_conv_shortcut_branch_1"
      convolution_param {
        num_output: 512
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_4a_bn_1_branch_1"
      type: "Scale"
      bottom: "block_4a_conv_1_branch_1"
      top: "block_4a_bn_1_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "block_4a_bn_shortcut_branch_1"
      type: "Scale"
      bottom: "block_4a_conv_shortcut_branch_1"
      top: "block_4a_bn_shortcut_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_8/Relu_branch_1"
      type: "ReLU"
      bottom: "block_4a_bn_1_branch_1"
      top: "activation_8/Relu_branch_1"
    }
    layer {
      name: "block_4a_conv_2_branch_1"
      type: "Convolution"
      bottom: "activation_8/Relu_branch_1"
      top: "block_4a_conv_2_branch_1"
      convolution_param {
        num_output: 512
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_4a_bn_2_branch_1"
      type: "Scale"
      bottom: "block_4a_conv_2_branch_1"
      top: "block_4a_bn_2_branch_1"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "add_4_branch_1"
      type: "Eltwise"
      bottom: "block_4a_bn_shortcut_branch_1"
      bottom: "block_4a_bn_2_branch_1"
      top: "add_4_branch_1"
      eltwise_param {
        operation: SUM
      }
    }
    layer {
      name: "activation_9/Relu_branch_1"
      type: "ReLU"
      bottom: "add_4_branch_1"
      top: "activation_9/Relu_branch_1"
    }
    layer {
      name: "conv2d_bbox_branch_1"
      type: "Convolution"
      bottom: "activation_9/Relu_branch_1"
      top: "conv2d_bbox_branch_1"
      convolution_param {
        num_output: 12
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "conv2d_cov_branch_1"
      type: "Convolution"
      bottom: "activation_9/Relu_branch_1"
      top: "conv2d_cov_branch_1"
      convolution_param {
        num_output: 3
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "conv2d_cov/Sigmoid_branch_1"
      type: "Sigmoid"
      bottom: "conv2d_cov_branch_1"
      top: "conv2d_cov/Sigmoid_branch_1"
    }

    # **********************************************************
    # Face detect model
    # **********************************************************

    layer {
      name: "conv1_branch_2"
      type: "Convolution"
      bottom: "data"
      top: "conv1_branch_2"
      convolution_param {
        num_output: 48
        pad_h: 3
        pad_w: 3
        kernel_h: 7
        kernel_w: 7
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "bn_conv1_branch_2"
      type: "Scale"
      bottom: "conv1_branch_2"
      top: "bn_conv1_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_1/Relu_branch_2"
      type: "ReLU"
      bottom: "bn_conv1_branch_2"
      top: "activation_1/Relu_branch_2"
    }
    layer {
      name: "block_1a_conv_1_branch_2"
      type: "Convolution"
      bottom: "activation_1/Relu_branch_2"
      top: "block_1a_conv_1_branch_2"
      convolution_param {
        num_output: 64
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_1a_conv_shortcut_branch_2"
      type: "Convolution"
      bottom: "activation_1/Relu_branch_2"
      top: "block_1a_conv_shortcut_branch_2"
      convolution_param {
        num_output: 64
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_1a_bn_1_branch_2"
      type: "Scale"
      bottom: "block_1a_conv_1_branch_2"
      top: "block_1a_bn_1_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "block_1a_bn_shortcut_branch_2"
      type: "Scale"
      bottom: "block_1a_conv_shortcut_branch_2"
      top: "block_1a_bn_shortcut_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_2/Relu_branch_2"
      type: "ReLU"
      bottom: "block_1a_bn_1_branch_2"
      top: "activation_2/Relu_branch_2"
    }
    layer {
      name: "block_1a_conv_2_branch_2"
      type: "Convolution"
      bottom: "activation_2/Relu_branch_2"
      top: "block_1a_conv_2_branch_2"
      convolution_param {
        num_output: 64
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_1a_bn_2_branch_2"
      type: "Scale"
      bottom: "block_1a_conv_2_branch_2"
      top: "block_1a_bn_2_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "add_1_branch_2"
      type: "Eltwise"
      bottom: "block_1a_bn_shortcut_branch_2"
      bottom: "block_1a_bn_2_branch_2"
      top: "add_1_branch_2"
      eltwise_param {
        operation: SUM
      }
    }
    layer {
      name: "activation_3/Relu_branch_2"
      type: "ReLU"
      bottom: "add_1_branch_2"
      top: "activation_3/Relu_branch_2"
    }
    layer {
      name: "block_2a_conv_1_branch_2"
      type: "Convolution"
      bottom: "activation_3/Relu_branch_2"
      top: "block_2a_conv_1_branch_2"
      convolution_param {
        num_output: 128
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_2a_conv_shortcut_branch_2"
      type: "Convolution"
      bottom: "activation_3/Relu_branch_2"
      top: "block_2a_conv_shortcut_branch_2"
      convolution_param {
        num_output: 128
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_2a_bn_1_branch_2"
      type: "Scale"
      bottom: "block_2a_conv_1_branch_2"
      top: "block_2a_bn_1_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "block_2a_bn_shortcut_branch_2"
      type: "Scale"
      bottom: "block_2a_conv_shortcut_branch_2"
      top: "block_2a_bn_shortcut_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_4/Relu_branch_2"
      type: "ReLU"
      bottom: "block_2a_bn_1_branch_2"
      top: "activation_4/Relu_branch_2"
    }
    layer {
      name: "block_2a_conv_2_branch_2"
      type: "Convolution"
      bottom: "activation_4/Relu_branch_2"
      top: "block_2a_conv_2_branch_2"
      convolution_param {
        num_output: 128
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_2a_bn_2_branch_2"
      type: "Scale"
      bottom: "block_2a_conv_2_branch_2"
      top: "block_2a_bn_2_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "add_2_branch_2"
      type: "Eltwise"
      bottom: "block_2a_bn_shortcut_branch_2"
      bottom: "block_2a_bn_2_branch_2"
      top: "add_2_branch_2"
      eltwise_param {
        operation: SUM
      }
    }
    layer {
      name: "activation_5/Relu_branch_2"
      type: "ReLU"
      bottom: "add_2_branch_2"
      top: "activation_5/Relu_branch_2"
    }
    layer {
      name: "block_3a_conv_1_branch_2"
      type: "Convolution"
      bottom: "activation_5/Relu_branch_2"
      top: "block_3a_conv_1_branch_2"
      convolution_param {
        num_output: 160
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_3a_conv_shortcut_branch_2"
      type: "Convolution"
      bottom: "activation_5/Relu_branch_2"
      top: "block_3a_conv_shortcut_branch_2"
      convolution_param {
        num_output: 128
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 2
        stride_w: 2
      }
    }
    layer {
      name: "block_3a_bn_1_branch_2"
      type: "Scale"
      bottom: "block_3a_conv_1_branch_2"
      top: "block_3a_bn_1_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "block_3a_bn_shortcut_branch_2"
      type: "Scale"
      bottom: "block_3a_conv_shortcut_branch_2"
      top: "block_3a_bn_shortcut_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_6/Relu_branch_2"
      type: "ReLU"
      bottom: "block_3a_bn_1_branch_2"
      top: "activation_6/Relu_branch_2"
    }
    layer {
      name: "block_3a_conv_2_branch_2"
      type: "Convolution"
      bottom: "activation_6/Relu_branch_2"
      top: "block_3a_conv_2_branch_2"
      convolution_param {
        num_output: 128
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_3a_bn_2_branch_2"
      type: "Scale"
      bottom: "block_3a_conv_2_branch_2"
      top: "block_3a_bn_2_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "add_3_branch_2"
      type: "Eltwise"
      bottom: "block_3a_bn_shortcut_branch_2"
      bottom: "block_3a_bn_2_branch_2"
      top: "add_3_branch_2"
      eltwise_param {
        operation: SUM
      }
    }
    layer {
      name: "activation_7/Relu_branch_2"
      type: "ReLU"
      bottom: "add_3_branch_2"
      top: "activation_7/Relu_branch_2"
    }
    layer {
      name: "block_4a_conv_1_branch_2"
      type: "Convolution"
      bottom: "activation_7/Relu_branch_2"
      top: "block_4a_conv_1_branch_2"
      convolution_param {
        num_output: 64
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_4a_conv_shortcut_branch_2"
      type: "Convolution"
      bottom: "activation_7/Relu_branch_2"
      top: "block_4a_conv_shortcut_branch_2"
      convolution_param {
        num_output: 88
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_4a_bn_1_branch_2"
      type: "Scale"
      bottom: "block_4a_conv_1_branch_2"
      top: "block_4a_bn_1_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "block_4a_bn_shortcut_branch_2"
      type: "Scale"
      bottom: "block_4a_conv_shortcut_branch_2"
      top: "block_4a_bn_shortcut_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "activation_8/Relu_branch_2"
      type: "ReLU"
      bottom: "block_4a_bn_1_branch_2"
      top: "activation_8/Relu_branch_2"
    }
    layer {
      name: "block_4a_conv_2_branch_2"
      type: "Convolution"
      bottom: "activation_8/Relu_branch_2"
      top: "block_4a_conv_2_branch_2"
      convolution_param {
        num_output: 88
        pad_h: 1
        pad_w: 1
        kernel_h: 3
        kernel_w: 3
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "block_4a_bn_2_branch_2"
      type: "Scale"
      bottom: "block_4a_conv_2_branch_2"
      top: "block_4a_bn_2_branch_2"
      scale_param {
        axis: 1
        bias_term: true
      }
    }
    layer {
      name: "add_4_branch_2"
      type: "Eltwise"
      bottom: "block_4a_bn_shortcut_branch_2"
      bottom: "block_4a_bn_2_branch_2"
      top: "add_4_branch_2"
      eltwise_param {
        operation: SUM
      }
    }
    layer {
      name: "activation_9/Relu_branch_2"
      type: "ReLU"
      bottom: "add_4_branch_2"
      top: "activation_9/Relu_branch_2"
    }
    layer {
      name: "output_bbox_branch_2"
      type: "Convolution"
      bottom: "activation_9/Relu_branch_2"
      top: "output_bbox_branch_2"
      convolution_param {
        num_output: 4
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "output_cov_branch_2"
      type: "Convolution"
      bottom: "activation_9/Relu_branch_2"
      top: "output_cov_branch_2"
      convolution_param {
        num_output: 1
        pad_h: 0
        pad_w: 0
        kernel_h: 1
        kernel_w: 1
        stride_h: 1
        stride_w: 1
      }
    }
    layer {
      name: "output_cov/Sigmoid_branch_2"
      type: "Sigmoid"
      bottom: "output_cov_branch_2"
      top: "output_cov/Sigmoid_branch_2"
    }

    # *************************************************
    # Combining outputs
    # *************************************************

    layer {
      name: "concatenate_cov"
      type: "Concat"
      bottom: "output_cov/Sigmoid_branch_2"
      bottom: "conv2d_cov/Sigmoid_branch_1"
      top: "output_cov"
      concat_param {
        axis: 1
      }
    }
    layer {
      name: "concatenate_bbox"
      type: "Concat"
      bottom: "output_bbox_branch_2"
      bottom: "conv2d_bbox_branch_1"
      top: "output_bbox"
      concat_param {
        axis: 1
      }
    }

And the corresponding config file:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=fd_lpd_model/fd_lpd.caffemodel
proto-file=fd_lpd_model/fd_lpd.prototxt
labelfile-path=fd_lpd_model/labels.txt
net-stride=16
batch-size=2
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2

num-detected-classes=4
interval=0
gie-unique-id=1
parse-func=4
output-blob-names=output_bbox;output_cov
#output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
#parse-bbox-func-name=NvDsInferParseCustomResnet
#custom-lib-path=/path/to/libnvdsparsebbox.so
#enable-dbscan=1

[class-attrs-all]
threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for enable-dbscan=1
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=1920
detected-max-h=1920

# Per class configuration
# ONLY INTERESTED IN CLASS 0 (Face) AND CLASS 1 (License Plate)
# CHANGE THRESH OF CLASS 2 AND CLASS 3 TO > 1 TO REJECT THE DETECTION
[class-attrs-2]
threshold=1.2
eps=0.5
group-threshold=3
roi-top-offset=20
roi-bottom-offset=10
detected-min-w=40
detected-min-h=40
detected-max-w=400
detected-max-h=800

# Per class configuration
[class-attrs-3]
threshold=1.2
eps=0.5
group-threshold=3
roi-top-offset=20
roi-bottom-offset=10
detected-min-w=40
detected-min-h=40
detected-max-w=400
detected-max-h=800

parser-bbox-norm=35.0;35.0

@neuroSparK

Since this error was reported by TensorRT, I tried to verify your prototxt with TensorRT (from JetPack 4.4 DP) independently, without DeepStream, and there is no problem with this prototxt.

Do you mind sharing your caffemodel so that I can verify the prototxt and caffemodel together?

You can use a command like this to verify your prototxt and caffemodel:

trtexec --deploy=fd_lpd.prototxt --model=fd_lpd.caffemodel --output=output_bbox,output_cov --saveEngine=fd_lpd.engine

I am using JetPack 4.4 GA (not DP).

@neuroSparK

You can try the same command on GA to verify whether TensorRT can parse this caffe model (prototxt and caffemodel) successfully without DeepStream.

The model has been running successfully with DS 4.1. It's the same model that is used in the redaction_with_deepstream sample. It also runs OK on DS 5.0 (DP), but not on GA (tested on Xavier).

@neuroSparK

We should narrow down where this error actually comes from.
I think the fault may come from TensorRT rather than the DeepStream pipeline; NvDsInfer in the DeepStream pipeline uses TensorRT to do inference.

You can try the following command to verify whether TensorRT from the new JetPack package (4.4 GA) can handle the prototxt and caffemodel that work properly on the old JetPack:

trtexec --deploy=fd_lpd.prototxt --model=fd_lpd.caffemodel --output=output_bbox,output_cov --saveEngine=fd_lpd.engine

@neuroSparK

In case you cannot find trtexec, it is located in /usr/src/tensorrt/bin/.
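
For example, from the directory containing the model files:

/usr/src/tensorrt/bin/trtexec --deploy=fd_lpd.prototxt --model=fd_lpd.caffemodel --output=output_bbox,output_cov --saveEngine=fd_lpd.engine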

The engine built by trtexec runs successfully in DeepStream, but DeepStream itself still fails to build the engine.
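
For reference, the pre-built engine is presumably loaded via the model-engine-file property of the nvinfer config, e.g. (the path below is a placeholder for this setup):

[property]
# Deserialize the engine produced by trtexec instead of building it
# from the caffemodel; path is a placeholder
model-engine-file=fd_lpd_model/fd_lpd.engine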

@neuroSparK

Thank you for the information.
We will look into it on JetPack 4.4 GA.

@neuroSparK

Try enabling the following option in [property]:

force-implicit-batch-dim=1
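
This makes nvinfer build the Caffe model with an implicit batch dimension, as TensorRT 6 and earlier did, so no optimization profile is needed. Applied to the pgie config above, the [property] section would begin like this (sketch):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=fd_lpd_model/fd_lpd.caffemodel
proto-file=fd_lpd_model/fd_lpd.prototxt
# Parse with an implicit batch dimension so TensorRT 7 does not
# require an optimization profile
force-implicit-batch-dim=1
batch-size=2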