I have been exploring the cuDNN library. I am able to create a simple neural network with one convolutional layer and one activation layer. I can propagate input forward through this simple network, and I am now looking to backpropagate the error through the network and update the weights. I am able to propagate the difference and compute the bias gradient and the filter (weights) gradient.
However, I wonder how I should update the weights using the gradient, i.e. w += -alpha * w_gradient. For the bias I used the cudnnAddTensor4d function with CUDNN_ADD_SAME_C. This function lets me set an alpha (weighting factor), so I can pass the negative learning rate there.
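To make the bias step concrete, here is a host-side sketch of what I understand the CUDNN_ADD_SAME_C mode to compute (this is my reading of the behavior, not cuDNN code): each element of channel c in the destination tensor has alpha * src[c] added to it. With the destination being the bias (1 x C x 1 x 1), src being the bias gradient, and alpha = -learning_rate, this amounts to bias += -lr * bias_gradient:

```c
/* Host-side sketch of the per-channel add I rely on for the bias update.
 * dst is an N x C x H x W tensor in NCHW layout; src holds one value per
 * channel. Every element of channel ic gets alpha * src[ic] added to it. */
static void add_same_c(float alpha, const float *src, float *dst,
                       int n, int c, int h, int w)
{
    for (int in = 0; in < n; ++in)
        for (int ic = 0; ic < c; ++ic)
            for (int ih = 0; ih < h; ++ih)
                for (int iw = 0; iw < w; ++iw)
                    dst[((in * c + ic) * h + ih) * w + iw] += alpha * src[ic];
}
```

For the bias tensor itself (1 x C x 1 x 1) the broadcast is trivial, but the same call shape covers adding the bias into the layer output during the forward pass.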
However, in order to update the filter weights I can't easily use this function, as the filter weights are described by the type cudnnFilterDescriptor_t rather than cudnnTensor4dDescriptor_t.
I could hack around this by creating a cudnnTensor4dDescriptor_t with the same dimensions as the filter and pointing it at the filter data. However, I wonder if there is a better way of going about this (something I am missing).
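One alternative I am considering: since the filter data behind the descriptor is just a contiguous device buffer of K*C*R*S floats, the update w += -alpha * w_gradient is a plain SAXPY and needs no descriptor at all, so something like cublasSaxpy on the raw pointers should work (this is my assumption, not something the cuDNN docs prescribe). On the host the update is simply:

```c
/* Plain SGD step: w += -alpha * g over a flat buffer of n floats.
 * On the device, the same elementwise update could presumably be done with
 * cublasSaxpy on the raw filter buffer, treating it as a flat array and
 * bypassing the descriptor types entirely (assumption on my part). */
static void sgd_update(float *w, const float *w_gradient, float alpha, int n)
{
    for (int i = 0; i < n; ++i)
        w[i] += -alpha * w_gradient[i];
}
```

Is relying on the filter memory being a flat array like this safe, or is there an intended cuDNN mechanism for the weight update?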