The output shape from TensorRT is wrong

I converted a TensorFlow model to UFF. However, the output shape is wrong.
The input shape is (3, 225, 300) (parser.register_input("image_tensor", (3, 225, 300), 0)). After a convolution (kernel=[3, 3], stride=2, depth=32, padding=same), the shape should be (1, 113, 150, 32), but I get an output of 537600 elements (537600 = 1 * 112 * 150 * 32).

The parser information is as below:
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/Conv2d_0/weights
[TensorRT] INFO: UFFParser: parsing FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D
[TensorRT] INFO: UFFParser: Convolution: Left: 1
[TensorRT] INFO: UFFParser: Convolution: Right: 1
[TensorRT] INFO: UFFParser: Convolution: Top: 0
[TensorRT] INFO: UFFParser: Convolution: Bottom: 1

It seems that when the input width is not equal to the input height, the output is wrong.

Hi,

TensorRT uses the full convolution formula (padding=same):
H: f(x, k, p, s) = ceil((x + 2*p - k) / s) + 1 = ceil((225 + 2*0 - 3) / 2) + 1 = 112.
W: f(x, k, p, s) = ceil((x + 2*p - k) / s) + 1 = ceil((300 + 2*0 - 3) / 2) + 1 = 150.

The output of TensorRT follows this rule correctly and information can be found at:
/usr/share/doc/tensorrt/html/classnvinfer1_1_1_i_output_dimensions_formula.html
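As a sanity check, the formula above can be evaluated directly. This is a minimal Python sketch; the function name is illustrative, not a TensorRT API:

```python
import math

def trt_conv_out(x, k, p, s):
    # TensorRT's default output-dimension formula (as quoted above):
    # ceil((x + 2p - k) / s) + 1
    return math.ceil((x + 2 * p - k) / s) + 1

# Conv2d_0: kernel 3x3, stride 2, zero explicit padding
print(trt_conv_out(225, 3, 0, 2))  # 112 (height)
print(trt_conv_out(300, 3, 0, 2))  # 150 (width)
```

Note that 1 * 112 * 150 * 32 = 537600, which matches the flat output size reported in the first post.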

Could you share how you get the output dimension of (1, 113, 150, 32)?
Thanks.

Hi AastaLLL:

When I modify the input shape to (3, 225, 225), the output shape from TensorRT becomes 408608 elements (408608 = 1 * 113 * 113 * 32).

I get the output dimension of (1, 113, 150, 32) from the original tensorflow model.

In TensorFlow, for 'SAME' padding, the output height and width are computed as:

out_height = ceil(float(in_height) / float(strides[1]))
out_width = ceil(float(in_width) / float(strides[2]))
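For comparison, TensorFlow's rule can be evaluated the same way (a small sketch; the function name is mine):

```python
import math

def tf_same_out(in_size, stride):
    # TensorFlow 'SAME' padding: the output size depends only on the
    # input size and stride, never on the kernel size
    return math.ceil(in_size / stride)

print(tf_same_out(225, 2))  # out_height = 113
print(tf_same_out(300, 2))  # out_width  = 150
```

For the 225-pixel dimension this gives 113, one more than the 112 produced by the ceil((x + 2p - k)/s) + 1 rule, which accounts for the element-count mismatch reported above.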

Thanks

Hi,

Happy New Year :)

We are checking this issue and will share an update with you later.
Thanks, and sorry for the late reply.

Hi,

There are several TensorFlow APIs for convolution.
Could you share which API you use?

Thanks.

Hi AastaLLL:

I use slim.conv2d

Thanks

Hi,

We can reproduce this issue now and are discussing it internally.
Will update information later.

Thanks.

Hi,

We have clarified the root cause and updated a fix for it.
The fix will be included in our future release.

Hi AastaLLL

Has the issue been fixed?

Thanks
Bryan

Hi,

This issue is fixed internally.
The fix will be included in our next TensorRT release.

Thanks.

Hi AastaLLL,

TensorRT 4.0 RC was released today.

Is this issue fixed in that release?

Yes.

This issue is fixed in our latest TensorRT 4.0: