Get warning "Bias weights are not set yet" when converting Caffe model with custom plugin

Description

Hi,
I am trying to use the TensorRT C++ API to convert a Caffe model that contains a shuffle module. Since TensorRT does not support converting the Caffe ShuffleChannel and Slice layers, I wrote custom ShuffleChannel and Slice plugins. The TRT engine is generated and serialized successfully and can run inference, but the inference result is not correct.
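For reference, the ShuffleChannel operation my plugin is meant to implement can be sketched in plain Python. This is a generic reference of the ShuffleNet channel-shuffle semantics (reshape channels to (groups, channels_per_group), transpose, flatten), not my actual plugin code:

```python
def shuffle_channel(x, groups):
    """Reference ShuffleChannel: x is a flat list of per-channel feature maps.

    Channels are viewed as (groups, channels_per_group) and transposed,
    which interleaves channels across groups (as in ShuffleNet).
    """
    channels = len(x)
    assert channels % groups == 0
    per_group = channels // groups
    # input channel c ends up at output position (c % per_group) * groups + c // per_group
    return [x[(i % groups) * per_group + i // groups] for i in range(channels)]

# 4 channels, 2 groups: [c0, c1, c2, c3] -> [c0, c2, c1, c3]
print(shuffle_channel(["c0", "c1", "c2", "c3"], 2))
```

Comparing a CUDA plugin's output against a reference like this on a small tensor is a quick way to rule the plugin kernel in or out as the source of error.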
Here is the model prototxt: face_qulity_conv_trt.prototxt (55.0 KB)

So I added output tensor bindings to debug, and compared the inference results of the model's intermediate layers with the Caffe model's outputs. I found that the difference is already huge at the first convolution layer, conv1.
Here is the comparison result:
[image]

I checked the log during ICudaEngine creation and found this:

Does that mean it did not get the weights/bias for the convolution layer? But why? And if not, why is the difference in the first layer's result so huge?

Here is my code for reference:
converter.zip (170.5 KB)

Environment

TensorRT Version: 7.0.0
GPU Type: 1070
Nvidia Driver Version: 455.45.01
CUDA Version: 10.2
CUDNN Version: 7.6
Operating System + Version: Ubuntu 18.04.4 LTS
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

The full log of the TRT conversion is here:
1.log (660.4 KB)

Hi @529683504,

Could you please confirm which version of TensorRT you are using.

Thank you.

It's 7.0.0.

Hi @529683504,

Looks like the layers don't have a bias term, e.g.:

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 24
    bias_term: false
    pad: 1
    kernel_size: 3
    stride: 2   
    weight_filler {
      type: "msra"
    }
  }
}

So the warning isn't surprising. We request you to please check your model definition.
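To illustrate why the warning is benign here (a generic sketch, not the TensorRT parser's actual code): a convolution declared with bias_term: false is just the weighted sum, which is identical to using an all-zero bias, so no accuracy is lost by the bias being "not set":

```python
def conv1d(x, w, bias=None, stride=1):
    """Minimal 1D valid cross-correlation; bias=None mimics bias_term: false."""
    k = len(w)
    out = []
    for i in range(0, len(x) - k + 1, stride):
        acc = sum(x[i + j] * w[j] for j in range(k))
        # a missing bias behaves exactly like a zero bias
        out.append(acc + (bias if bias is not None else 0.0))
    return out

x = [1.0, 2.0, 3.0, 4.0]
w = [1.0, 0.0, -1.0]
print(conv1d(x, w))            # no bias term
print(conv1d(x, w, bias=0.0))  # explicit zero bias gives the same result
```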

Thank you.

I compared the conversion code with the Python script I use to convert Caffe models without custom layers, and there is no difference. What do you mean by "check model definition"? And how should I debug this phenomenon?

BTW, I used the same prototxt file both for running inference on the Caffe model and for converting the TRT model.

I solved this problem! After checking the prototxt carefully, I found that some layers' outputs share the same name; for example, the outputs (tops) of the layers conv1, conv1_bn, conv1_scale and conv1_relu are all named conv1.
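Such in-place layers (BatchNorm/Scale/ReLU reusing their bottom blob as top) are common in Caffe prototxts and make per-layer debug outputs ambiguous. A crude line-based scan can flag them; find_shared_tops below is a hypothetical helper I'm sketching here, not part of any Caffe or TensorRT API:

```python
import re
from collections import defaultdict

def find_shared_tops(prototxt_text):
    """Map each 'top' blob name to the layers that write it.

    Crude line-based scan of a Caffe prototxt: blobs written by more than
    one layer are the ones whose debug outputs are ambiguous.
    """
    tops = defaultdict(list)
    current_layer = None
    for line in prototxt_text.splitlines():
        m = re.search(r'name:\s*"([^"]+)"', line)
        if m:
            current_layer = m.group(1)
        m = re.search(r'top:\s*"([^"]+)"', line)
        if m and current_layer:
            tops[m.group(1)].append(current_layer)
    return {blob: layers for blob, layers in tops.items() if len(layers) > 1}

sample = '''
layer { name: "conv1" type: "Convolution" bottom: "data" top: "conv1" }
layer { name: "conv1_bn" type: "BatchNorm" bottom: "conv1" top: "conv1" }
layer { name: "conv1_relu" type: "ReLU" bottom: "conv1" top: "conv1" }
'''
print(find_shared_tops(sample))  # {'conv1': ['conv1', 'conv1_bn', 'conv1_relu']}
```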


So I guessed that the huge difference might come from comparing different output layers (because layer fusion happens during conversion). I set that aside and marked more tensors as outputs (except the tensors sharing a name). I found that the difference begins at the second branch of the first Slice layer, so I checked the code of the Slice layer (a custom layer implemented by me) and found the mistake. Then I solved it!
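A bug in a custom Slice plugin typically shows up exactly like this: the first branch is right and later branches read from wrong offsets. For reference, the expected semantics of Caffe's Slice along the channel axis is just contiguous partitioning at the slice points — a plain-Python sketch, not the plugin code itself:

```python
def slice_channels(x, slice_points):
    """Reference Caffe Slice along the channel axis.

    x is a list of channels; slice_points are the interior split indices,
    as in Caffe's slice_param. Returns one output blob per segment.
    """
    bounds = [0] + list(slice_points) + [len(x)]
    return [x[a:b] for a, b in zip(bounds, bounds[1:])]

# 4 channels split at point 2 -> two branches of 2 channels each
print(slice_channels(["c0", "c1", "c2", "c3"], [2]))
```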
