TensorRT max pooling error for different kernel and stride combinations

Here is the first pooling layer of the VGG-16 net. I changed the pooling kernel and stride to 3, but the output was all zeros. I tried various combinations of kernel (K) and stride (S) sizes, and the output appears to be buggy for some of them:

K=2, S=2 : OK (exactly the same result as the Caffe output)
K=2, S=3,4,5 : ZERO (output is all zeros)
K=3, S=2 : OK (difference from the Caffe output is around 10e-6 on average)
K=3, S=3,4,5 : ZERO
K=4, S=2,3,5 : OK
K=4, S=4 : OK
K=5, S=2,3,4,5 : ZERO
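To have an independent baseline for each K/S combination, a plain NumPy reference of Caffe-style max pooling (ceil-mode output size) can be used to cross-check the TensorRT output. This is my own minimal sketch, not code from either framework:

```python
import numpy as np

def caffe_max_pool2d(x, k, s):
    """Reference max pooling over a (H, W) array using Caffe's
    ceil-mode output size: out = ceil((in - k) / s) + 1."""
    h, w = x.shape
    oh = int(np.ceil((h - k) / s)) + 1
    ow = int(np.ceil((w - k) / s)) + 1
    out = np.empty((oh, ow), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            # Border windows may extend past the input; clip them.
            win = x[i * s : min(i * s + k, h), j * s : min(j * s + k, w)]
            out[i, j] = win.max()
    return out

x = np.arange(36, dtype=np.float32).reshape(6, 6)
print(caffe_max_pool2d(x, k=3, s=3))
# [[14. 17.]
#  [32. 35.]]
```

Running this on the same input tensor as the TensorRT engine should make it obvious whether the zeros come from the pooling math or from somewhere else in the pipeline.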

Here is the network prototxt used for the experiment:

name: "deploy"
state {
  phase: TEST
  level: 0
}
layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param {
    shape {
      dim: 1
      dim: 3
      dim: 300
      dim: 300
    }
  }
}
layer {
  name: "vgg_conv_1"
  type: "Convolution"
  bottom: "data"
  top: "vgg_conv_1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "constant"
      value: 0
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "relu1_1"
  type: "ReLU"
  bottom: "vgg_conv_1"
  top: "vgg_conv_1"
}
layer {
  name: "vgg_conv_2"
  type: "Convolution"
  bottom: "vgg_conv_1"
  top: "vgg_conv_2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}
layer {
  name: "relu1_2"
  type: "ReLU"
  bottom: "vgg_conv_2"
  top: "vgg_conv_2"
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "vgg_conv_2"
  top: "output"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 3
  }
}
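For the prototxt above (300×300 input, two pad-1 3×3 convolutions, then K=3/S=3 pooling), the expected output shape can be worked out from Caffe's layer-size formulas. A quick sketch (the formulas are Caffe's; the script itself is mine):

```python
import math

def conv_out(n, k, pad, stride=1):
    # Caffe convolution: floor((n + 2*pad - k) / stride) + 1
    return (n + 2 * pad - k) // stride + 1

def pool_out(n, k, stride):
    # Caffe pooling uses ceil mode: ceil((n - k) / stride) + 1
    return math.ceil((n - k) / stride) + 1

n = 300
n = conv_out(n, k=3, pad=1)     # vgg_conv_1 -> 300
n = conv_out(n, k=3, pad=1)     # vgg_conv_2 -> 300
n = pool_out(n, k=3, stride=3)  # pool1      -> ceil(297/3) + 1 = 100
print(n)  # 100
```

So a 1x64x100x100 output blob is expected for K=3, S=3; if the engine produces a blob of that shape but filled with zeros, the sizing logic is fine and only the pooled values are wrong.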

Any comments on this situation?

Hi,
did you solve this problem?
Which version of TensorRT did you use for your test?
I am encountering a problem like yours…

We created a new “Deep Learning Training and Inference” section in Devtalk to improve the experience for deep learning, accelerated computing, and HPC users:
https://devtalk.nvidia.com/default/board/301/deep-learning-training-and-inference-/

We are moving active deep learning threads to the new section.

URLs for topics will not change with the re-categorization, so your bookmarks and links will continue to work as before.

-Siddharth

Hello, did you solve this problem?

Under TensorRT 3.0 I get this error when I run the code on my Jetson TX2, but I don't get it when I run it on my TitanX Maxwell.

David

Happens to me as well.

Could you check the output of the UFF-to-plan compiler? In my case it adds a new “Unnamed [Padding]” layer just before the second occurrence of MaxPool in the model. That MaxPool then produces a different result compared to the original TensorFlow model.
Since the first MaxPool in the model doesn’t have this layer in front of it and works properly, I believe the problem is not in the MaxPool implementation but is somehow related to this added layer / the compiler…
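That would explain a mismatch wherever the feature map contains negative values: TensorFlow's SAME max pooling simply clips border windows to the input, whereas an explicit zero-padding layer followed by a VALID max pool feeds zeros into the max, and a zero beats any negative activation. A small NumPy illustration of the difference (my own sketch, not the actual converter behaviour):

```python
import numpy as np

def max_pool_same(x, k=2, s=2):
    """TF-style SAME max pooling: border windows are clipped to the
    input, so padded positions never enter the max."""
    h, w = x.shape
    oh, ow = -(-h // s), -(-w // s)  # ceil division
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = x[i * s : i * s + k, j * s : j * s + k].max()
    return out

def max_pool_zero_padded(x, k=2, s=2):
    """Explicit zero padding followed by a VALID max pool, which is
    what an inserted [Padding] layer would amount to."""
    h, w = x.shape
    ph = (-(-h // s)) * s + k - s - h  # bottom padding to fit windows
    pw = (-(-w // s)) * s + k - s - w  # right padding
    xp = np.pad(x, ((0, ph), (0, pw)))  # pads with zeros
    return max_pool_same(xp, k, s)

x = -np.arange(1.0, 10.0).reshape(3, 3)  # all-negative feature map
print(max_pool_same(x))         # bottom-right value: -9.0
print(max_pool_zero_padded(x))  # border values become 0.0 (zeros win)
```

In this thread the pools sit right after ReLU layers, whose outputs are non-negative, so zero padding alone would not normally flip values to zero; but for a general TensorFlow graph it is enough to make the converted model diverge from the original, consistent with what you observed.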