Error when using a downloaded MobileNet SSD model

Please provide complete information as applicable to your setup.
The error shown in the attached screenshot occurs while using TensorRT. Kindly help. I also want to save the serialized engine so that it can be used in DeepStream.


**• Hardware Platform (Jetson / GPU)** T4
• DeepStream Version 4.0
• JetPack Version (valid for Jetson only)
• TensorRT Version 5.1.5
• NVIDIA GPU Driver Version (valid for GPU only)

@GokuDDG

Please share your caffemodel with us, especially the prototxt file.

How do I share the model and prototxt with you? I cannot attach them here, so I have pasted the prototxt below.
name: “VGG_VOC0712_SSD_300x300_deploy”
input: “data”
input_shape {
dim: 1
dim: 3
dim: 300
dim: 300
}
layer {
name: “conv1_1”
type: “Convolution”
bottom: “data”
top: “conv1_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu1_1”
type: “ReLU”
bottom: “conv1_1”
top: “conv1_1”
}
layer {
name: “conv1_2”
type: “Convolution”
bottom: “conv1_1”
top: “conv1_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 64
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu1_2”
type: “ReLU”
bottom: “conv1_2”
top: “conv1_2”
}
layer {
name: “pool1”
type: “Pooling”
bottom: “conv1_2”
top: “pool1”
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: “conv2_1”
type: “Convolution”
bottom: “pool1”
top: “conv2_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu2_1”
type: “ReLU”
bottom: “conv2_1”
top: “conv2_1”
}
layer {
name: “conv2_2”
type: “Convolution”
bottom: “conv2_1”
top: “conv2_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu2_2”
type: “ReLU”
bottom: “conv2_2”
top: “conv2_2”
}
layer {
name: “pool2”
type: “Pooling”
bottom: “conv2_2”
top: “pool2”
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: “conv3_1”
type: “Convolution”
bottom: “pool2”
top: “conv3_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu3_1”
type: “ReLU”
bottom: “conv3_1”
top: “conv3_1”
}
layer {
name: “conv3_2”
type: “Convolution”
bottom: “conv3_1”
top: “conv3_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu3_2”
type: “ReLU”
bottom: “conv3_2”
top: “conv3_2”
}
layer {
name: “conv3_3”
type: “Convolution”
bottom: “conv3_2”
top: “conv3_3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu3_3”
type: “ReLU”
bottom: “conv3_3”
top: “conv3_3”
}
layer {
name: “pool3”
type: “Pooling”
bottom: “conv3_3”
top: “pool3”
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: “conv4_1”
type: “Convolution”
bottom: “pool3”
top: “conv4_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu4_1”
type: “ReLU”
bottom: “conv4_1”
top: “conv4_1”
}
layer {
name: “conv4_2”
type: “Convolution”
bottom: “conv4_1”
top: “conv4_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu4_2”
type: “ReLU”
bottom: “conv4_2”
top: “conv4_2”
}
layer {
name: “conv4_3”
type: “Convolution”
bottom: “conv4_2”
top: “conv4_3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu4_3”
type: “ReLU”
bottom: “conv4_3”
top: “conv4_3”
}
layer {
name: “pool4”
type: “Pooling”
bottom: “conv4_3”
top: “pool4”
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: “conv5_1”
type: “Convolution”
bottom: “pool4”
top: “conv5_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
dilation: 1
}
}
layer {
name: “relu5_1”
type: “ReLU”
bottom: “conv5_1”
top: “conv5_1”
}
layer {
name: “conv5_2”
type: “Convolution”
bottom: “conv5_1”
top: “conv5_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
dilation: 1
}
}
layer {
name: “relu5_2”
type: “ReLU”
bottom: “conv5_2”
top: “conv5_2”
}
layer {
name: “conv5_3”
type: “Convolution”
bottom: “conv5_2”
top: “conv5_3”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
dilation: 1
}
}
layer {
name: “relu5_3”
type: “ReLU”
bottom: “conv5_3”
top: “conv5_3”
}
layer {
name: “pool5”
type: “Pooling”
bottom: “conv5_3”
top: “pool5”
pooling_param {
pool: MAX
kernel_size: 3
stride: 1
pad: 1
}
}
layer {
name: “fc6”
type: “Convolution”
bottom: “pool5”
top: “fc6”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 1024
pad: 6
kernel_size: 3
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
dilation: 6
}
}
layer {
name: “relu6”
type: “ReLU”
bottom: “fc6”
top: “fc6”
}
layer {
name: “fc7”
type: “Convolution”
bottom: “fc6”
top: “fc7”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 1024
kernel_size: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “relu7”
type: “ReLU”
bottom: “fc7”
top: “fc7”
}
layer {
name: “conv6_1”
type: “Convolution”
bottom: “fc7”
top: “conv6_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 0
kernel_size: 1
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv6_1_relu”
type: “ReLU”
bottom: “conv6_1”
top: “conv6_1”
}
layer {
name: “conv6_2”
type: “Convolution”
bottom: “conv6_1”
top: “conv6_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 512
pad: 1
kernel_size: 3
stride: 2
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv6_2_relu”
type: “ReLU”
bottom: “conv6_2”
top: “conv6_2”
}
layer {
name: “conv7_1”
type: “Convolution”
bottom: “conv6_2”
top: “conv7_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 0
kernel_size: 1
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv7_1_relu”
type: “ReLU”
bottom: “conv7_1”
top: “conv7_1”
}
layer {
name: “conv7_2”
type: “Convolution”
bottom: “conv7_1”
top: “conv7_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 1
kernel_size: 3
stride: 2
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv7_2_relu”
type: “ReLU”
bottom: “conv7_2”
top: “conv7_2”
}
layer {
name: “conv8_1”
type: “Convolution”
bottom: “conv7_2”
top: “conv8_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 0
kernel_size: 1
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv8_1_relu”
type: “ReLU”
bottom: “conv8_1”
top: “conv8_1”
}
layer {
name: “conv8_2”
type: “Convolution”
bottom: “conv8_1”
top: “conv8_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 0
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv8_2_relu”
type: “ReLU”
bottom: “conv8_2”
top: “conv8_2”
}
layer {
name: “conv9_1”
type: “Convolution”
bottom: “conv8_2”
top: “conv9_1”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 128
pad: 0
kernel_size: 1
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv9_1_relu”
type: “ReLU”
bottom: “conv9_1”
top: “conv9_1”
}
layer {
name: “conv9_2”
type: “Convolution”
bottom: “conv9_1”
top: “conv9_2”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 256
pad: 0
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv9_2_relu”
type: “ReLU”
bottom: “conv9_2”
top: “conv9_2”
}
layer {
name: “conv4_3_norm”
type: “Normalize”
bottom: “conv4_3”
top: “conv4_3_norm”
norm_param {
across_spatial: false
scale_filler {
type: “constant”
value: 20
}
channel_shared: false
}
}
layer {
name: “conv4_3_norm_mbox_loc”
type: “Convolution”
bottom: “conv4_3_norm”
top: “conv4_3_norm_mbox_loc”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 16
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv4_3_norm_mbox_loc_perm”
type: “Permute”
bottom: “conv4_3_norm_mbox_loc”
top: “conv4_3_norm_mbox_loc_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv4_3_norm_mbox_loc_flat”
type: “Flatten”
bottom: “conv4_3_norm_mbox_loc_perm”
top: “conv4_3_norm_mbox_loc_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv4_3_norm_mbox_conf”
type: “Convolution”
bottom: “conv4_3_norm”
top: “conv4_3_norm_mbox_conf”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 84
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv4_3_norm_mbox_conf_perm”
type: “Permute”
bottom: “conv4_3_norm_mbox_conf”
top: “conv4_3_norm_mbox_conf_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv4_3_norm_mbox_conf_flat”
type: “Flatten”
bottom: “conv4_3_norm_mbox_conf_perm”
top: “conv4_3_norm_mbox_conf_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv4_3_norm_mbox_priorbox”
type: “PriorBox”
bottom: “conv4_3_norm”
bottom: “data”
top: “conv4_3_norm_mbox_priorbox”
prior_box_param {
min_size: 30.0
max_size: 60.0
aspect_ratio: 2
flip: true
clip: false
variance: 0.1
variance: 0.1
variance: 0.2
variance: 0.2
step: 8
offset: 0.5
}
}
layer {
name: “fc7_mbox_loc”
type: “Convolution”
bottom: “fc7”
top: “fc7_mbox_loc”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 24
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “fc7_mbox_loc_perm”
type: “Permute”
bottom: “fc7_mbox_loc”
top: “fc7_mbox_loc_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “fc7_mbox_loc_flat”
type: “Flatten”
bottom: “fc7_mbox_loc_perm”
top: “fc7_mbox_loc_flat”
flatten_param {
axis: 1
}
}
layer {
name: “fc7_mbox_conf”
type: “Convolution”
bottom: “fc7”
top: “fc7_mbox_conf”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 126
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “fc7_mbox_conf_perm”
type: “Permute”
bottom: “fc7_mbox_conf”
top: “fc7_mbox_conf_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “fc7_mbox_conf_flat”
type: “Flatten”
bottom: “fc7_mbox_conf_perm”
top: “fc7_mbox_conf_flat”
flatten_param {
axis: 1
}
}
layer {
name: “fc7_mbox_priorbox”
type: “PriorBox”
bottom: “fc7”
bottom: “data”
top: “fc7_mbox_priorbox”
prior_box_param {
min_size: 60.0
max_size: 111.0
aspect_ratio: 2
aspect_ratio: 3
flip: true
clip: false
variance: 0.1
variance: 0.1
variance: 0.2
variance: 0.2
step: 16
offset: 0.5
}
}
layer {
name: “conv6_2_mbox_loc”
type: “Convolution”
bottom: “conv6_2”
top: “conv6_2_mbox_loc”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 24
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv6_2_mbox_loc_perm”
type: “Permute”
bottom: “conv6_2_mbox_loc”
top: “conv6_2_mbox_loc_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv6_2_mbox_loc_flat”
type: “Flatten”
bottom: “conv6_2_mbox_loc_perm”
top: “conv6_2_mbox_loc_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv6_2_mbox_conf”
type: “Convolution”
bottom: “conv6_2”
top: “conv6_2_mbox_conf”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 126
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv6_2_mbox_conf_perm”
type: “Permute”
bottom: “conv6_2_mbox_conf”
top: “conv6_2_mbox_conf_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv6_2_mbox_conf_flat”
type: “Flatten”
bottom: “conv6_2_mbox_conf_perm”
top: “conv6_2_mbox_conf_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv6_2_mbox_priorbox”
type: “PriorBox”
bottom: “conv6_2”
bottom: “data”
top: “conv6_2_mbox_priorbox”
prior_box_param {
min_size: 111.0
max_size: 162.0
aspect_ratio: 2
aspect_ratio: 3
flip: true
clip: false
variance: 0.1
variance: 0.1
variance: 0.2
variance: 0.2
step: 32
offset: 0.5
}
}
layer {
name: “conv7_2_mbox_loc”
type: “Convolution”
bottom: “conv7_2”
top: “conv7_2_mbox_loc”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 24
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv7_2_mbox_loc_perm”
type: “Permute”
bottom: “conv7_2_mbox_loc”
top: “conv7_2_mbox_loc_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv7_2_mbox_loc_flat”
type: “Flatten”
bottom: “conv7_2_mbox_loc_perm”
top: “conv7_2_mbox_loc_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv7_2_mbox_conf”
type: “Convolution”
bottom: “conv7_2”
top: “conv7_2_mbox_conf”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 126
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv7_2_mbox_conf_perm”
type: “Permute”
bottom: “conv7_2_mbox_conf”
top: “conv7_2_mbox_conf_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv7_2_mbox_conf_flat”
type: “Flatten”
bottom: “conv7_2_mbox_conf_perm”
top: “conv7_2_mbox_conf_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv7_2_mbox_priorbox”
type: “PriorBox”
bottom: “conv7_2”
bottom: “data”
top: “conv7_2_mbox_priorbox”
prior_box_param {
min_size: 162.0
max_size: 213.0
aspect_ratio: 2
aspect_ratio: 3
flip: true
clip: false
variance: 0.1
variance: 0.1
variance: 0.2
variance: 0.2
step: 64
offset: 0.5
}
}
layer {
name: “conv8_2_mbox_loc”
type: “Convolution”
bottom: “conv8_2”
top: “conv8_2_mbox_loc”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 16
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv8_2_mbox_loc_perm”
type: “Permute”
bottom: “conv8_2_mbox_loc”
top: “conv8_2_mbox_loc_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv8_2_mbox_loc_flat”
type: “Flatten”
bottom: “conv8_2_mbox_loc_perm”
top: “conv8_2_mbox_loc_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv8_2_mbox_conf”
type: “Convolution”
bottom: “conv8_2”
top: “conv8_2_mbox_conf”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 84
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv8_2_mbox_conf_perm”
type: “Permute”
bottom: “conv8_2_mbox_conf”
top: “conv8_2_mbox_conf_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv8_2_mbox_conf_flat”
type: “Flatten”
bottom: “conv8_2_mbox_conf_perm”
top: “conv8_2_mbox_conf_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv8_2_mbox_priorbox”
type: “PriorBox”
bottom: “conv8_2”
bottom: “data”
top: “conv8_2_mbox_priorbox”
prior_box_param {
min_size: 213.0
max_size: 264.0
aspect_ratio: 2
flip: true
clip: false
variance: 0.1
variance: 0.1
variance: 0.2
variance: 0.2
step: 100
offset: 0.5
}
}
layer {
name: “conv9_2_mbox_loc”
type: “Convolution”
bottom: “conv9_2”
top: “conv9_2_mbox_loc”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 16
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv9_2_mbox_loc_perm”
type: “Permute”
bottom: “conv9_2_mbox_loc”
top: “conv9_2_mbox_loc_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv9_2_mbox_loc_flat”
type: “Flatten”
bottom: “conv9_2_mbox_loc_perm”
top: “conv9_2_mbox_loc_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv9_2_mbox_conf”
type: “Convolution”
bottom: “conv9_2”
top: “conv9_2_mbox_conf”
param {
lr_mult: 1
decay_mult: 1
}
param {
lr_mult: 2
decay_mult: 0
}
convolution_param {
num_output: 84
pad: 1
kernel_size: 3
stride: 1
weight_filler {
type: “xavier”
}
bias_filler {
type: “constant”
value: 0
}
}
}
layer {
name: “conv9_2_mbox_conf_perm”
type: “Permute”
bottom: “conv9_2_mbox_conf”
top: “conv9_2_mbox_conf_perm”
permute_param {
order: 0
order: 2
order: 3
order: 1
}
}
layer {
name: “conv9_2_mbox_conf_flat”
type: “Flatten”
bottom: “conv9_2_mbox_conf_perm”
top: “conv9_2_mbox_conf_flat”
flatten_param {
axis: 1
}
}
layer {
name: “conv9_2_mbox_priorbox”
type: “PriorBox”
bottom: “conv9_2”
bottom: “data”
top: “conv9_2_mbox_priorbox”
prior_box_param {
min_size: 264.0
max_size: 315.0
aspect_ratio: 2
flip: true
clip: false
variance: 0.1
variance: 0.1
variance: 0.2
variance: 0.2
step: 300
offset: 0.5
}
}
layer {
name: “mbox_loc”
type: “Concat”
bottom: “conv4_3_norm_mbox_loc_flat”
bottom: “fc7_mbox_loc_flat”
bottom: “conv6_2_mbox_loc_flat”
bottom: “conv7_2_mbox_loc_flat”
bottom: “conv8_2_mbox_loc_flat”
bottom: “conv9_2_mbox_loc_flat”
top: “mbox_loc”
concat_param {
axis: 1
}
}
layer {
name: “mbox_conf”
type: “Concat”
bottom: “conv4_3_norm_mbox_conf_flat”
bottom: “fc7_mbox_conf_flat”
bottom: “conv6_2_mbox_conf_flat”
bottom: “conv7_2_mbox_conf_flat”
bottom: “conv8_2_mbox_conf_flat”
bottom: “conv9_2_mbox_conf_flat”
top: “mbox_conf”
concat_param {
axis: 1
}
}
layer {
name: “mbox_priorbox”
type: “Concat”
bottom: “conv4_3_norm_mbox_priorbox”
bottom: “fc7_mbox_priorbox”
bottom: “conv6_2_mbox_priorbox”
bottom: “conv7_2_mbox_priorbox”
bottom: “conv8_2_mbox_priorbox”
bottom: “conv9_2_mbox_priorbox”
top: “mbox_priorbox”
concat_param {
axis: 2
}
}
layer {
name: “mbox_conf_reshape”
type: “Reshape”
bottom: “mbox_conf”
top: “mbox_conf_reshape”
reshape_param {
shape {
dim: 0
dim: -1
dim: 21
}
}
}
layer {
name: “mbox_conf_softmax”
type: “Softmax”
bottom: “mbox_conf_reshape”
top: “mbox_conf_softmax”
softmax_param {
axis: 2
}
}
layer {
name: “mbox_conf_flatten”
type: “Flatten”
bottom: “mbox_conf_softmax”
top: “mbox_conf_flatten”
flatten_param {
axis: 1
}
}
layer {
name: “detection_out”
type: “DetectionOutput”
bottom: “mbox_loc”
bottom: “mbox_conf_flatten”
bottom: “mbox_priorbox”
top: “detection_out”
include {
phase: TEST
}
detection_output_param {
num_classes: 21
share_location: true
background_label_id: 0
nms_param {
nms_threshold: 0.45
top_k: 400
}
save_output_param {
label_map_file: “data/VOC0712/labelmap_voc.prototxt”
}
code_type: CENTER_SIZE
keep_top_k: 200
confidence_threshold: 0.01
}
}

@GokuDDG

Since we will no longer add new fixes to TensorRT's Caffe parser, one quick workaround is to replace all Flatten layers with Reshape layers.

For example, if a Flatten layer converts (b, c, h, w) into (b, c * h * w), an equivalent Reshape layer can do something similar, converting (b, c, h, w) into (b, c * h * w, 1, 1):

layer {
  name: "conv4_3_norm_mbox_loc_flat"
  type: "Reshape"
  bottom: "conv4_3_norm_mbox_loc_perm"
  top: "conv4_3_norm_mbox_loc_flat"
  reshape_param {
    shape {
      dim: 0
      dim: -1
      dim: 1
      dim: 1
    }
  }
}
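
The same substitution can be applied to the other Flatten layers in your prototxt. As an illustrative sketch following the same pattern (not a tested drop-in), fc7_mbox_loc_flat would become:

layer {
  name: "fc7_mbox_loc_flat"
  type: "Reshape"
  bottom: "fc7_mbox_loc_perm"
  top: "fc7_mbox_loc_flat"
  reshape_param {
    shape {
      dim: 0
      dim: -1
      dim: 1
      dim: 1
    }
  }
}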

Hi @ersheng,
So in 5.0, can we directly provide the .caffemodel and .prototxt in the nvinfer config file? Are you suggesting that we move to 5.0?

@GokuDDG

You can move to DS 5.0, but TensorRT in DS 5.0 does not support Caffe Flatten layers either.
There will be no new fixes for the Caffe parser; the ONNX parser will be the main supported path going forward.
You can convert the caffemodel into an engine separately with trtexec in your current setup, and then point the nvinfer config to the engine file.
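
Once trtexec has produced the engine, a minimal nvinfer config sketch could look like the following. The paths and batch size are placeholders for your own setup, and SSD's DetectionOutput layer still needs a custom bounding-box parser, such as the one shipped with DeepStream's objectDetector_SSD sample:

[property]
gpu-id=0
# Engine file produced by trtexec (placeholder path)
model-engine-file=/path/to/ssd_b16.engine
batch-size=16
# 0 = FP32, 1 = INT8, 2 = FP16
network-mode=0
num-detected-classes=21
gie-unique-id=1
# Custom SSD output parser (placeholder library path)
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=/path/to/libnvdsinfer_custom_impl_ssd.so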

How do we use trtexec? Can you share a link?

/usr/src/tensorrt/bin/trtexec

Hi @ersheng,
Can you please suggest which /usr/src/tensorrt/bin/trtexec command we should use, or share a similar example, to convert the caffemodel?
Do you mean: trtexec --deploy=/path/to/mnist.prototxt --model=/path/to/mnist.caffemodel --output=prob --batch=16 --saveEngine=mnist16.trt
Thanks in advance.

Yes, you can try that command.

Hi @ersheng,

sudo ./trtexec --deploy=/usr/src/tensorrt/data/ssd/ssd.prototxt --model=/usr/src/tensorrt/data/ssd/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel --output=detection_out --batch=16 --saveEngine=mnist16.trt
&&&& RUNNING TensorRT.trtexec # ./trtexec --deploy=/usr/src/tensorrt/data/ssd/ssd.prototxt --model=/usr/src/tensorrt/data/ssd/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel --output=detection_out --batch=16 --saveEngine=mnist16.trt
[I] deploy: /usr/src/tensorrt/data/ssd/ssd.prototxt
[I] model: /usr/src/tensorrt/data/ssd/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel
[I] output: detection_out
[I] batch: 16
[I] saveEngine: mnist16.trt
Plugin layer output count is not equal to caffe output count
[E] Engine could not be created
[E] Engine could not be created
&&&& FAILED TensorRT.trtexec # ./trtexec --deploy=/usr/src/tensorrt/data/ssd/ssd.prototxt --model=/usr/src/tensorrt/data/ssd/VGG_VOC0712_SSD_300x300_iter_120000.caffemodel --output=detection_out --batch=16 --saveEngine=mnist16.trt
This is the error we got after following the steps you suggested. Kindly help us out.
Thanks in advance.