In TensorRT 5.1 RC, the following code
nvcaffeparser1::ICaffeParser *parser = nvcaffeparser1::createCaffeParser();
const nvcaffeparser1::IBlobNameToTensor *blobNameToTensor =
    parser->parseBuffers(protoContent.constData(), protoContent.size(),
                         modelContent.constData(), modelContent.size(),
                         *network, nvinfer1::DataType::kFLOAT);
results in a “could not parse layer type Slice” message and blobNameToTensor == nullptr.
This suggests the parser fails to process Caffe model files that contain a “Slice” layer.
The “Slice” layer is supposed to be supported by TensorRT 5.1.
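For reference, the failure can be detected right after the call (a minimal sketch; getInferLibVersion() is declared in NvInfer.h, and std::fprintf needs <cstdio>):

if (blobNameToTensor == nullptr) {
    // The logger receives "could not parse layer type Slice" before this point.
    std::fprintf(stderr, "Caffe parse failed (TensorRT %d)\n", getInferLibVersion());
    parser->destroy();
}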
I have the same problem.
Have you found a solution yet?
Hi @lwy8976,
There is no easy solution. It would be nice if TensorRT supported the “Slice” layer in nvcaffeparser, but as things stand there are two options.

The first is to convert your model “manually”: reconstruct it layer by layer and copy the weights using the TensorRT API, i.e. do yourself what nvcaffeparser is supposed to do.

The other way (which is what I am currently doing) is to restructure your model and remove the “Slice” layer. In my case the “Slice” layer comes after a convolution layer and slices the output of 64 convolutions into 2x32. The computational equivalent is passing the input through two convolution layers with 32 outputs each. It is even possible to convert an already trained model this way; you just need to be a bit careful and copy the weights correctly (see the sketches below). However, you will probably notice some degradation in speed, perhaps due to internal GPU optimisations and/or Caffe framework overheads; in theory the number of additions and multiplications is the same. In my tests in the Caffe framework I see about a 10% decrease in inference speed if I “preslice” the model, but with TensorRT that might not be the case.
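A minimal sketch of the first option, assuming the weights have already been extracted from the .caffemodel into host buffers (kernelData, biasData, input, and all shapes here are illustrative placeholders; addConvolution is the TensorRT 5.x builder API):

// Rebuild one 64-output 3x3 convolution by hand instead of via nvcaffeparser.
// kernelData/biasData are assumed to be float* host buffers holding the Caffe weights.
nvinfer1::Weights kernel{nvinfer1::DataType::kFLOAT, kernelData, 64 * inC * 3 * 3};
nvinfer1::Weights bias{nvinfer1::DataType::kFLOAT, biasData, 64};
nvinfer1::IConvolutionLayer *conv =
    network->addConvolution(*input, 64, nvinfer1::DimsHW{3, 3}, kernel, bias);
conv->setStride(nvinfer1::DimsHW{1, 1});
conv->setPadding(nvinfer1::DimsHW{1, 1});

And a sketch of the second option. Caffe stores convolution weights in [out, in, kH, kW] order, so the first 32 filters occupy the first half of the weight blob; the “presliced” network is just two 32-output convolutions reading from the two halves of the same buffers (again, names and shapes are illustrative):

// A 64-output convolution followed by Slice, replaced by two 32-output convolutions.
const int inC = 3, kH = 3, kW = 3;       // illustrative shapes
const int half = 32 * inC * kH * kW;     // floats per 32 filters (OIHW layout)
nvinfer1::Weights wA{nvinfer1::DataType::kFLOAT, kernelData, half};
nvinfer1::Weights wB{nvinfer1::DataType::kFLOAT, kernelData + half, half};
nvinfer1::Weights bA{nvinfer1::DataType::kFLOAT, biasData, 32};
nvinfer1::Weights bB{nvinfer1::DataType::kFLOAT, biasData + 32, 32};
auto *convA = network->addConvolution(*input, 32, nvinfer1::DimsHW{kH, kW}, wA, bA);
auto *convB = network->addConvolution(*input, 32, nvinfer1::DimsHW{kH, kW}, wB, bB);
// convA->getOutput(0) and convB->getOutput(0) now correspond to the two tops
// of the original Slice layer.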