Description
I add a pooling layer in TRT 5.1.3 and TRT 7.1.3 respectively; the pooling layer is configured as follows:
// pool is created with addPoolingNd and an average 4x4 window (variable names are illustrative)
nvinfer1::IPoolingLayer* pool = network->addPoolingNd(*input, nvinfer1::PoolingType::kAVERAGE, nvinfer1::Dims2(kernel[0], kernel[1]));
pool->setPaddingMode(nvinfer1::PaddingMode::kCAFFE_ROUND_UP);
pool->setStrideNd(nvinfer1::Dims2(stride[0], stride[1]));
pool->setPaddingNd(nvinfer1::Dims2(pad[0], pad[1]));
pool->setAverageCountExcludesPadding(false);
The parameters for the test case are: input [1,1,4,5], output [1,1,1,2], kernel size [4,4], stride [2,2], pad [0,0], pool_type = average, and every element of the input is set to 1.
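(The [1,1,1,2] output shape follows from kCAFFE_ROUND_UP, which as far as I understand uses the Caffe formula: outH = ceil((4 + 2*0 - 4) / 2) + 1 = 1 and outW = ceil((5 + 2*0 - 4) / 2) + 1 = 2.)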
In TRT 5.1.3 the output values are [1, 1], which also matches the Caffe result. But in TRT 7.1.3 the output values are [1, 0.75], and the calculation method looks different from Caffe's.
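For reference, the 0.75 looks like it could come from the divisor counting the part of the second window that falls outside the input: with stride 2 the second window covers columns 2..5, but the input only has columns 0..4. Below is just my hand check of the two divisor conventions (plain C++, illustrative only, not TensorRT code):
// Hand check of the second output element; every input element is 1.
#include <cstdio>

int main() {
    const float sum      = 4 * 3 * 1.0f;  // 4 rows x 3 in-bounds columns of the 4x4 window
    const float fullArea = 4 * 4;         // divisor that also counts the out-of-bounds cells
    const float realArea = 4 * 3;         // divisor that counts only the in-bounds cells
    std::printf("count includes padding: %.2f\n", sum / fullArea);  // 0.75 -> the TRT 7.1.3 result
    std::printf("count excludes padding: %.2f\n", sum / realArea);  // 1.00 -> the Caffe / TRT 5.1.3 result
    return 0;
}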
I would like to ask whether the different reference results between these two versions are caused by my settings, or whether TensorRT has updated the calculation method of average pooling?
Environment
TensorRT Version: 7.1.3 (results compared against 5.1.3)
GPU Type: RTX 2070
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0.2
Operating System + Version: Ubuntu 16.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered
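A minimal sketch of my TRT 7.1.3 repro (single-layer explicit-batch network, all-ones input; variable names are illustrative, error checking and cleanup are omitted). The TRT 5.1.3 build uses the same pooling configuration, only the network-creation calls differ slightly between versions. There is no error or traceback in either version; only the output values differ.

// Repro sketch: average pooling 4x4, stride 2, pad 0, kCAFFE_ROUND_UP, count includes padding
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
};

int main() {
    Logger logger;
    auto* builder = nvinfer1::createInferBuilder(logger);
    auto* network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));

    // 1x1x4x5 input, filled with ones at runtime
    auto* input = network->addInput("input", nvinfer1::DataType::kFLOAT, nvinfer1::Dims4{1, 1, 4, 5});

    // Pooling layer configured exactly as in the description above
    auto* pool = network->addPoolingNd(*input, nvinfer1::PoolingType::kAVERAGE, nvinfer1::Dims2{4, 4});
    pool->setPaddingMode(nvinfer1::PaddingMode::kCAFFE_ROUND_UP);
    pool->setStrideNd(nvinfer1::Dims2{2, 2});
    pool->setPaddingNd(nvinfer1::Dims2{0, 0});
    pool->setAverageCountExcludesPadding(false);
    network->markOutput(*pool->getOutput(0));

    auto* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 20);
    auto* engine = builder->buildEngineWithConfig(*network, *config);
    auto* context = engine->createExecutionContext();

    // Host buffers: input all ones, output is 1x1x1x2
    std::vector<float> hIn(1 * 1 * 4 * 5, 1.0f);
    std::vector<float> hOut(2, 0.0f);

    void* dIn = nullptr;
    void* dOut = nullptr;
    cudaMalloc(&dIn, hIn.size() * sizeof(float));
    cudaMalloc(&dOut, hOut.size() * sizeof(float));
    cudaMemcpy(dIn, hIn.data(), hIn.size() * sizeof(float), cudaMemcpyHostToDevice);

    // Resolve binding order by name instead of assuming the input is binding 0
    const int inIdx = engine->getBindingIndex("input");
    void* bindings[2];
    bindings[inIdx] = dIn;
    bindings[1 - inIdx] = dOut;
    context->executeV2(bindings);

    cudaMemcpy(hOut.data(), dOut, hOut.size() * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("output: %f %f\n", hOut[0], hOut[1]);
    return 0;
}

Build with something like: g++ repro.cpp -I/usr/local/cuda/include -I<TensorRT>/include -L/usr/local/cuda/lib64 -L<TensorRT>/lib -lnvinfer -lcudart -o repro, then run ./repro (paths depend on the local install). The TRT 7.1.3 build prints 1.000000 0.750000 for me, while the TRT 5.1.3 build prints 1.000000 1.000000.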