TensorRT 3 gives a wrong shape for transposed convolution


I am using TensorRT 3.0.1 and TensorFlow 1.3.
My model uses a transposed convolution, and I do an element-wise sum of its output with the output of a convolution from an earlier layer (a skip connection). This worked fine during training.

When I try to optimize with TensorRT, it gives me a shape-mismatch error on the element-wise operation, even though I set the padding parameter to ‘same’ in all conv and transposed-conv layers. Here is the error:

Using output node dev_0/cnn/decoder/conv2d_transpose_2/conv2d_transpose
Converting to UFF graph
No. nodes: 142
[TensorRT] ERROR: dev_0/cnn/decoder/add: all elementwise inputs must have same dimensions or follow the broadcasting rules
[TensorRT] ERROR: Failed to create engine

Does TensorRT apply additional padding during conversion to the UFF graph? Please help me resolve this.
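For context on how such a mismatch can arise even with ‘same’ padding everywhere: TensorFlow’s ‘SAME’ convolution produces ceil(n / stride) outputs, while a ‘SAME’ transposed convolution produces n * stride. When the original spatial size is odd, the round trip does not return the original size, so the skip connection and the upsampled tensor differ by one. A minimal sketch of the arithmetic (the size 9 is a hypothetical example, not taken from the model above):

```python
import math

def conv_same_out(n, stride):
    # TensorFlow 'SAME' convolution output size: ceil(n / stride)
    return math.ceil(n / stride)

def conv_transpose_same_out(n, stride):
    # TensorFlow 'SAME' transposed convolution output size: n * stride
    return n * stride

h = 9                                   # hypothetical odd spatial size
down = conv_same_out(h, 2)              # 9 -> 5
up = conv_transpose_same_out(down, 2)   # 5 -> 10, not 9
print(down, up)                         # the element-wise add of 10 vs 9 fails
```

During TensorFlow training this is often papered over by an implicit reshape or slice, which (as the reply below notes) the UFF importer drops.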

Best Regards


This is a known issue.

In TensorRT 3, reshape is only applied to constant weights.
Reshapes of tensors are automatically dropped when a UFF model is imported into a TensorRT engine.
This mechanism can lead to unexpected behavior and cause dimension-incompatibility issues.

This issue has been reported to the developer team and will be prioritized internally.
Thanks, and sorry for the inconvenience.

Hello, I am now using TRT but still hit the same issue; details are below. Has TRT still not fixed it?

UFFParser: parsing res_aspp_g/decoder/resnet/res4b1_branch2a/weights
UFFParser: parsing res_aspp_g/decoder/resnet/res4b1_branch2a/Conv2D
res_aspp_g/decoder/resnet/res4a: elementwise inputs must have same dimensions or follow the broadcasting rules
UFFParser: parsing res_aspp_g/decoder/resnet/res4b1_branch2a/biases
UFFParser: parsing res_aspp_g/decoder/resnet/res4b1_branch2a/BiasAdd
res_aspp_g/decoder/resnet/res4a_relu: at least one non-batch dimension is required for input
UFFParser: Parser error: res_aspp_g/decoder/resnet/res4b1_branch2a/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
Failed to parse UFF



This fix has not been implemented yet due to internal prioritization.
We will forward your request to our internal team.


I am using TensorRT- on Windows with uff converter tool version 0.5.5, and I am still facing a similar issue. Is this issue fixed, or does it still exist?

Note: I verified that my frozen_graph.pb file runs inference correctly, and the uff conversion shows no errors or warnings. The dimension-mismatch error arises only when I try to parse the uff file with NvUffParser.


You can find our support matrix here:

Unfortunately, the reshape layer is discarded when creating a TensorRT engine from UFF.
It’s recommended that you implement it yourself with a plugin layer.
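Besides a custom plugin, a workaround that sometimes avoids the dropped reshape is to make the shapes match inside the TensorFlow graph itself before freezing, e.g. by slicing the upsampled tensor down to the skip connection’s spatial size with a fixed `tf.slice` (whether a given slice op survives UFF conversion depends on your version’s support matrix, so verify against it). The crop itself is trivial; a minimal pure-Python sketch of the idea, using a hypothetical `crop_to_match` helper on nested-list feature maps:

```python
def crop_to_match(upsampled, target_h, target_w):
    # Keep only the leading target_h rows and target_w columns of an
    # H x W feature map (nested lists), so an element-wise add with a
    # skip connection of that size becomes shape-safe.
    return [row[:target_w] for row in upsampled[:target_h]]

up = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # 3x3 upsampled output
cropped = crop_to_match(up, 2, 2)        # cropped to match a 2x2 skip tensor
print(cropped)
```

Because the crop is expressed as an explicit graph op with constant sizes rather than a reshape, the UFF importer has nothing to drop.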