Description
Hi,
I am trying to use the TensorRT C++ API to convert a Caffe model that contains a shuffle module. Since TensorRT's Caffe parser does not support the ShuffleChannel and Slice layers, I wrote custom ShuffleChannel and Slice plugins. The TRT engine is built and serialized successfully and can run inference, but the inference result is not correct.
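For reference, ShuffleChannel with group count g views the C channels as a (g, C/g) matrix, transposes it to (C/g, g), and flattens it back. A minimal CPU reference in plain C++ (no TensorRT types; function and variable names are my own) can be used to check the plugin's output element by element:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Reference ShuffleChannel for one sample in CHW layout.
// Input channel index  = g * (channels/groups) + i
// Output channel index = i * groups + g
// spatial = H * W elements per channel.
std::vector<float> shuffle_channel(const std::vector<float>& in,
                                   int channels, int spatial, int groups) {
    assert(channels % groups == 0);
    assert(static_cast<int>(in.size()) == channels * spatial);
    const int per_group = channels / groups;
    std::vector<float> out(in.size());
    for (int g = 0; g < groups; ++g) {
        for (int i = 0; i < per_group; ++i) {
            const int src = g * per_group + i;  // channel before shuffle
            const int dst = i * groups + g;     // channel after shuffle
            std::copy(in.begin() + src * spatial,
                      in.begin() + (src + 1) * spatial,
                      out.begin() + dst * spatial);
        }
    }
    return out;
}
```

Feeding the same input through this reference and through the plugin (copied back to host) should give identical results; if they diverge, the plugin kernel itself is the problem rather than the weights.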
here is the model prototxt:face_qulity_conv_trt.prototxt (55.0 KB)
To debug, I added output tensor bindings for the intermediate layers and compared their results against the Caffe model's outputs. I found that the difference is already huge at the first convolution layer, conv1.
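One way to quantify a per-layer mismatch like this (helper names below are my own, not TensorRT API) is to compute the maximum absolute difference and the cosine similarity between the two host-side output buffers; a cosine similarity near 1.0 with a large absolute difference often points at a scale/weight problem rather than a wrong layer topology:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Largest element-wise deviation between two blobs of equal size.
float max_abs_diff(const std::vector<float>& a, const std::vector<float>& b) {
    assert(a.size() == b.size());
    float m = 0.f;
    for (std::size_t i = 0; i < a.size(); ++i)
        m = std::max(m, std::fabs(a[i] - b[i]));
    return m;
}

// Cosine similarity; ~1.0 means the outputs agree up to a scale factor.
float cosine_sim(const std::vector<float>& a, const std::vector<float>& b) {
    assert(a.size() == b.size());
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return static_cast<float>(dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12));
}
```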
Here is the comparison result:
I checked the log during ICudaEngine creation and found this:
Does that mean the weights/biases for the convolution layer were not loaded? If so, why? And if they were loaded, why is the difference at the very first layer so large?
Here is my code for reference:
converter.zip (170.5 KB)
Environment
TensorRT Version: 7.0.0
GPU Type: GTX 1070
Nvidia Driver Version: 455.45.01
CUDA Version: 10.2
CUDNN Version: 7.6
Operating System + Version: Ubuntu 18.04.4 LTS
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): NA
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered