Cannot convert caffe pooling layer with kernel_size 1 and stride 2 to TensorRT

I have a Caffe model that converted successfully with TensorRT 5.0, but the conversion fails after upgrading to TensorRT 7.0. After looking into the error, I found that the exception occurs when converting a pooling layer to TensorRT. The snippet below is the pooling setup in the prototxt file:

layer {
  name: "output"
  type: "Pooling"
  bottom: "input"
  top: "input"
  pooling_param {
    pool: MAX
    kernel_size: 1
    stride: 2
  }
}

and the error message is:

output cannot use Caffe round up padding mode with padding greater than the filter.

From the documentation (https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/namespacenvinfer1.html#af0cf8e1034112a1472f3a4bc00f5de62), the problem seems to come from restriction 2:

CAFFE_ROUND_UP: B >= (F + 1) is an error if (B + S) >= (F + 1)

I don't know why there is such a limitation, and even explicitly adding pad: 0 does not help.
Does anyone have the same problem? Any help would be appreciated.
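To see why a kernel_size 1 / stride 2 pooling layer can trip this restriction, here is a minimal sketch (not TensorRT code) of Caffe's ceil-mode output-size arithmetic; `caffe_pool_out` is a hypothetical helper, and the sketch ignores Caffe's own clipping of windows that start entirely inside padding:

```python
import math

def caffe_pool_out(in_size, kernel, stride, pad=0):
    # Caffe rounds the pooled output size up (ceil mode)
    out = math.ceil((in_size + 2 * pad - kernel) / stride) + 1
    # Extent actually covered by the pooling windows
    covered = (out - 1) * stride + kernel
    # Implicit end padding needed to realize that output size
    end_pad = covered - in_size - pad
    return out, end_pad

# kernel_size: 1, stride: 2 -- the layer from the question
print(caffe_pool_out(5, 1, 2))  # (3, 0): odd input, no implicit end padding
print(caffe_pool_out(4, 1, 2))  # (3, 1): even input, end padding 1 >= kernel 1
```

With an even input size, the round-up mode implies an end padding of 1, which is not smaller than the 1x1 kernel, so the last pooling window would read only padding. That is the "padding greater than the filter" case TensorRT 7 rejects.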

Hi,

These restrictions come from Caffe itself: the CAFFE_ROUND_DOWN and CAFFE_ROUND_UP modes are defined to be consistent with the Caffe framework's behavior.

Thanks

I didn't change my prototxt, yet I get the above error after upgrading to TensorRT 7. Could there be a bug in the conversion code?

Hi,

Can you provide the following information so we can better help?
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow/PyTorch/Caffe version
o TensorRT version

Please share the script & model file to reproduce the issue.

Thanks

I got the same issue here.

My system env:

Ubuntu 19.10
TensorRT 7
CUDA 10.2

Why can TensorRT 5 and 6 convert this model while TensorRT 7 fails?

The "padding = 0" assertion failure has been fixed, and the fix should be available in the next release.
We are deprecating the Caffe parser and UFF parser in TensorRT 7, so going forward we recommend the ONNX parser path.
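For reference, the ONNX path typically means converting the Caffe model to ONNX first and then building the engine from that, e.g. with the trtexec tool that ships with TensorRT (file names below are placeholders):

```shell
# Convert the Caffe model to ONNX first (e.g. with a Caffe-to-ONNX
# converter or by re-exporting from the training framework), then
# build and serialize a TensorRT engine from the ONNX file:
trtexec --onnx=model.onnx --saveEngine=model.engine
```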

Thanks