ERROR: conv2d_transpose: output dimensions height must be greater than zero.

We are using TensorRT version 5.0.4.3.
ERROR: MobilenetV2/mv2_3_upsample/conv2d_transpose: output dimensions height must be greater than zero.
ERROR: UFFParser: Parser error: MobilenetV2/mv2_3_upsample/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
Error parsing model…

We ran into this problem with the conv2d_transpose (transposed convolution) layer when we tried to do upsampling. Below is our code:
mv2_branch = tf.layers.conv2d_transpose(mv2_branch, 48, 4, strides=2, padding='SAME', trainable=trainable, name="mv2_3_upsample")

We are using the NCHW format as input. All of the conv2d layers were converted OK, but only the conv2d_transpose layer ran into this problem.
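
For reference, here is a minimal sketch of how we build such a layer on NCHW input (the placeholder shape is illustrative, not our exact graph; note that tf.layers defaults to channels_last, so NCHW needs data_format='channels_first'):

import tensorflow as tf

# Illustrative sketch: NCHW input requires data_format='channels_first'
# (tf.layers defaults to channels_last).
mv2_branch = tf.placeholder(tf.float32, [None, 24, 24, 24], name="mv2_branch")
mv2_branch = tf.layers.conv2d_transpose(
    mv2_branch, 48, 4,
    strides=2,
    padding='SAME',
    data_format='channels_first',
    trainable=True,
    name="mv2_3_upsample")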

Could someone please help?

Hi,

May I know the input dimension of mv2_branch?
I'm not sure whether this issue is caused by a dilated convolution.

Thanks.

The input dimension of mv2_branch is 24x24x24.
According to your documentation, transposed conv2d is supposed to be supported. We don't know why it does not work in the uff->trt converter. We are able to generate the UFF file correctly, but not the TRT engine. Could you please help? Please let us know if you need more information from us. Thanks
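
For context, our uff->trt step looks roughly like this (a sketch following the standard TensorRT 5 UFF workflow; the file name, input name, and output node are placeholders):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.UffParser() as parser:
    # register_input takes the shape in CHW order; NCHW is the parser default.
    parser.register_input("image", (24, 24, 24))
    parser.register_output("MobilenetV2/mv2_3_upsample/BiasAdd")
    # Parsing fails here with the conv2d_transpose errors above.
    parser.parse("model.uff", network)
    engine = builder.build_cuda_engine(network)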

Hi,

I tried to reproduce your issue with this sample:

inputs = tf.placeholder(tf.float32, [24, 24, 24], name="inputs")
output = tf.layers.conv2d_transpose(inputs, 48, 4, strides=2, padding='SAME', name="output")

But I met a TensorFlow error indicating the dimension is incorrect:

ValueError: Input 0 of layer output is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [24, 24, 24]

Could you check the mv2_branch dimension again? Is it a 4D tensor like [4,24,24,24]?
Thanks.

It has to be a 4D tensor, and the first dimension is the batch dimension. E.g., if the batch size is 16, then the tensor is [16,24,24,24]. It usually shows as [?,24,24,24] in PyCharm.
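
In other words, a repro closer to our graph would be something like this (shapes illustrative; data_format='channels_first' assumed for NCHW):

import tensorflow as tf

# None for the batch dimension is what PyCharm shows as '?'.
inputs = tf.placeholder(tf.float32, [None, 24, 24, 24], name="inputs")
output = tf.layers.conv2d_transpose(inputs, 48, 4, strides=2, padding='SAME',
                                    data_format='channels_first', name="output")

print(inputs.shape)  # (?, 24, 24, 24)
print(output.shape)  # (?, 48, 48, 48)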

Hi,

Could you share your model with us?
We want to reproduce this issue internally and report it to our internal team.

Thanks.