 # Transposed Convolution

Hi,

I’m working on a project where we want to compare different implementations of a DNN. One of these implementations uses cuDNN.
We need to implement a transposed convolution equivalent to the Conv2DTranspose layer in TensorFlow Keras.

Is there an example or guide on implementing transposed convolution in cuDNN?

Hugo

Hi @hugo.kieffer ,
Deconvolution/transposed convolution is basically the “dgrad” operation. Dgrad is designed for the backward phase of training, so you may need to choose your filter layout accordingly. (If a forward convolution from tensor A (NCHW) to tensor C (NKPQ) uses a KRSC filter, then the dgrad operation takes tensor C as input and tensor A as output, but still uses the KRSC filter.)
Note also that an unstrided (unit-stride) deconvolution is just a convolution with the filter transposed (hence the alternate name “transposed convolution”). So if it’s not a strided deconvolution, just use cuDNN’s convolution operator and flip the cross-correlation flag.
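As a semantics reference (not cuDNN code), here is a single-channel NumPy sketch of both points above: `conv2d_transpose` implements the scatter-add ("dgrad") view of transposed convolution, and the assertion checks that at unit stride it coincides with an ordinary full correlation against the flipped kernel. The function names are my own illustration.

```python
import numpy as np

def conv2d_transpose(x, k, stride):
    """Single-channel transposed convolution via scatter-add: each input
    pixel adds a scaled copy of the kernel to the output. This is the
    'dgrad' view: the gradient of a strided forward convolution with
    respect to its input."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros(((H - 1) * stride + kh, (W - 1) * stride + kw))
    for i in range(H):
        for j in range(W):
            out[i * stride:i * stride + kh,
                j * stride:j * stride + kw] += x[i, j] * k
    return out

def corr2d_full(x, k):
    """Full cross-correlation: output covers every overlap position."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    out = np.zeros((xp.shape[0] - kh + 1, xp.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(9.0).reshape(3, 3)
k = np.array([[1.0, 2.0], [3.0, 4.0]])

# Unit stride: transposed convolution == full correlation with the
# flipped kernel, i.e. an ordinary convolution (hence the name).
assert np.allclose(conv2d_transpose(x, k, stride=1),
                   corr2d_full(x, np.flip(k)))
```

Real layers also sum over input channels and batch, but the spatial behaviour per channel is exactly this scatter-add.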

I’m afraid we don’t have a sample available for this, however.

Thanks

Hugo,

Have you had any success implementing this layer in cuDNN?

Hello, I haven’t had time to get back to this yet; we have several parallel implementations (FPGA and GPU) and have been working on the FPGA target.
I’ll share an example when I have working code.

Hi,

I’m reopening this thread because I’m working on this implementation again.
I want to add some information about the type of transposed convolution we want to perform.

In our TensorFlow model we use Conv2DTranspose like this:
Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')

The parameters are:

• 128 kernels of 2×2
• Stride of 2
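One useful property of this particular configuration: because the kernel size (2×2) equals the stride (2), the kernel copies written by the transposed convolution never overlap, so each input pixel independently fills one 2×2 output block. A single-channel NumPy sketch of this (my own illustration, not the Keras implementation):

```python
import numpy as np

# With kernel size == stride (2x2 kernel, stride 2), the scatter regions
# of a transposed convolution never overlap: each input pixel x[i, j]
# fills the output block out[2i:2i+2, 2j:2j+2] with x[i, j] * k.
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
k = np.array([[1.0, -1.0],
              [0.5, 0.0]])

out = np.kron(x, k)  # Kronecker product == non-overlapping scatter here
assert out.shape == (4, 4)
assert np.allclose(out[2:4, 0:2], x[1, 0] * k)  # block for input pixel (1, 0)
```

This makes the single-channel case easy to check by hand against whatever cuDNN produces.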

The input of this Conv2DTranspose is the output of a Conv2D:
Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')

Full code:

```python
c5 = tf.keras.layers.Conv2D(256, (3, 3), activation='relu', kernel_initializer='he_normal', padding='same')(c5)
u6 = tf.keras.layers.Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(c5)
```
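If this goes through cuDNN's dgrad, as suggested earlier in the thread, you describe the hypothetical forward convolution that runs the other way: it takes the transposed-conv output (2H×2W, since padding='same' with stride 2 here doubles the spatial size) as its input and produces the transposed-conv input (H×W). For a 2×2 kernel with stride 2, that forward convolution needs padding 0. A quick shape check (helper name is my own):

```python
def conv2d_out_size(size, kernel, pad, stride):
    """Standard forward-convolution output size formula."""
    return (size + 2 * pad - kernel) // stride + 1

# Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same') maps
# H x W -> 2H x 2W. The forward convolution described to cuDNN's dgrad
# therefore maps 2H x 2W -> H x W: kernel 2, stride 2, padding 0.
H = 16
assert conv2d_out_size(2 * H, kernel=2, pad=0, stride=2) == H
```

In other words, the convolution descriptor passed to cuDNN would use padding 0 and stride 2 even though the Keras layer says padding='same'; the 'same' bookkeeping belongs to the transposed view, not to the forward convolution dgrad sees.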

I think I need to use cuDNN’s backward data convolution (dgrad), but I’m not sure…

I tried to implement a simple transposed convolution first, with only one kernel and a small input tensor, to understand the implementation, but I always get CUDNN_STATUS_BAD_PARAM with no further information…

Does anyone know of examples or tips for implementing this type of transposed convolution in cuDNN?