Hey again everybody. I’m trying to wrap my head around how cuDNN works as an overall network WITH backpropagation.
Here’s my simple network:
64x64x3 Image → Convolution → Activation → TransposeConvolution → Activation → 64x64x3 Image
Think of it as a simple autoencoder. I give it an image, and it outputs an image. The code is basically:
cudnnConvolutionForward() → cudnnActivationForward() → cudnnConvolutionBackwardData() → cudnnActivationForward()
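For intuition on why `cudnnConvolutionBackwardData()` acts as the transposed convolution on the forward pass: in the matrix view, a convolution computes `y = C @ x` for some (sparse) matrix `C` built from the filter, and the backward-data op multiplies by `C.T` — which is exactly what "transposed convolution" means. Here's a small 1-D NumPy sketch of that math (my own toy `conv1d`/`conv_matrix` helpers, not the cuDNN API):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D convolution (cross-correlation, as cuDNN computes it), stride 1, no padding
def conv1d(x, w):
    n, k = len(x), len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(n - k + 1)])

# Build the matrix C such that conv1d(x, w) == C @ x
def conv_matrix(w, n):
    k = len(w)
    C = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        C[i, i:i + k] = w
    return C

x = rng.standard_normal(8)
w = rng.standard_normal(3)
C = conv_matrix(w, 8)

# Forward convolution is multiplication by C (maps length 8 -> 6)
assert np.allclose(conv1d(x, w), C @ x)

# "Transposed convolution" (what ConvolutionBackwardData computes) is C.T,
# which maps the small side back to the large side (6 -> 8, i.e. upsampling)
y = rng.standard_normal(6)
up = C.T @ y
assert up.shape == (8,)
```

So using `cudnnConvolutionBackwardData()` as your decoder's forward op is legitimate; it's the same trick most frameworks use under the hood.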
Nice, right? Okay, that’s the easy part of course. What I’m struggling with is how to backpropagate the error (target image - output image). This is how I think it should flow:
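One thing to pin down before the chain even starts: for an MSE-style loss L = ½‖output − target‖², the gradient you feed into the first backward call is just `output − target`, computed elementwise (cuDNN doesn't do this step for you; it's a trivial elementwise op). A NumPy sketch of the math, assuming MSE (illustration only, not cuDNN code):

```python
import numpy as np

rng = np.random.default_rng(0)
output = rng.standard_normal((64, 64, 3))  # network output
target = rng.standard_normal((64, 64, 3))  # target image

# MSE loss and its gradient w.r.t. the output — this gradient tensor is
# the "dy" handed to the first backward call in the chain
loss = 0.5 * np.sum((output - target) ** 2)
dL_dout = output - target

# Finite-difference spot check on one element
eps = 1e-6
o2 = output.copy()
o2[0, 0, 0] += eps
num = (0.5 * np.sum((o2 - target) ** 2) - loss) / eps
assert abs(num - dL_dout[0, 0, 0]) < 1e-3
```

Note the sign: with L = ½‖output − target‖² the gradient is output − target, not target − output, so if you compute the error the other way around you need to negate it before backpropagating.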
cudnnActivationBackward() → cudnnConvolutionForward() → cudnnActivationBackward() → cudnnConvolutionBackwardData()
What I don’t get is, well, most of this. If I call cudnnConvolutionBackwardData on the forward pass, do I call cudnnConvolutionForward on the backward pass to get the gradient? IDK. And what about the filter (weights) update? I see there is a cudnnConvolutionBackwardFilter function, but will it work on the filter used in the transposed convolution?
There is very little direction out there on this…none, actually, that I’ve seen. It’s hard enough to understand how to get a forward pass set up, but backwards? Wow…IDK.
Any help or pointers would be appreciated! :)
-Chris