Hi, I have been struggling to understand how cudnnSoftmaxBackward works. I found the documentation unclear, and I would like to confirm my understanding of the function's behavior. Would it be possible to improve the documentation to clarify these points? For completeness, I am using the cuDNN API doc (API Reference :: NVIDIA Deep Learning cuDNN Documentation) and cuDNN version 8.0.5.39.
According to the API doc, among other arguments, cudnnSoftmaxBackward takes three tensors: yData and dy as inputs, and dx as the output. All three tensors must have the same dimensions.
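For reference, this is the signature as I read it in the 8.x API Reference (transcribed by hand, so please correct me if I mis-copied anything):

```c
/* Transcribed from the cuDNN 8.x API Reference:
 * yData and dy are inputs, dx is the output. */
cudnnStatus_t cudnnSoftmaxBackward(
    cudnnHandle_t                  handle,
    cudnnSoftmaxAlgorithm_t        algorithm,
    cudnnSoftmaxMode_t             mode,
    const void                    *alpha,
    const cudnnTensorDescriptor_t  yDesc,
    const void                    *yData,
    const cudnnTensorDescriptor_t  dyDesc,
    const void                    *dy,
    const void                    *beta,
    const cudnnTensorDescriptor_t  dxDesc,
    void                          *dx);
```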
The first clarification: The documentation for cudnnSoftmaxBackward states: “This routine computes the gradient of the softmax function.” I find this misleading because, mathematically, softmax maps a vector to a vector, so its derivative is a Jacobian matrix, one dimension higher than the output tensor. Upon investigation, it seems that cudnnSoftmaxBackward does not return that Jacobian; instead, it applies the Jacobian to the incoming gradient dy (a vector-Jacobian product) and returns the result as dx.
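Concretely, writing s = softmax(x) (my own notation; this derivation is not from the docs), my understanding is that the routine computes, per softmax slice:

```latex
J_{ij} \;=\; \frac{\partial s_i}{\partial x_j} \;=\; s_i(\delta_{ij} - s_j),
\qquad
dx_i \;=\; \sum_j J_{ij}\, dy_j \;=\; s_i\Big(dy_i - \sum_j s_j\, dy_j\Big).
```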
The second clarification: To get the correct result out of cudnnSoftmaxBackward, the yData input must already be a softmax output; that is, I have to first run the original tensor through cudnnSoftmaxForward and pass the forward result as yData. The routine does not apply softmax internally. This is NOT stated anywhere in the docs. A minimal call sequence is sketched below.
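Here is a minimal sketch of the call order I believe is intended (my own test code, with error checking elided; I use a 1×3×1×1 tensor here so the softmax is nontrivial):

```c
#include <cudnn.h>
#include <cuda_runtime.h>
#include <stdio.h>

/* Sketch: the yData passed to cudnnSoftmaxBackward must be the OUTPUT
 * of cudnnSoftmaxForward, not the raw pre-softmax tensor. */
int main(void) {
    cudnnHandle_t handle;
    cudnnCreate(&handle);

    /* One 4D NCHW descriptor shared by x, y, dy, dx (all same dims). */
    cudnnTensorDescriptor_t desc;
    cudnnCreateTensorDescriptor(&desc);
    cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               1, 3, 1, 1); /* n=1, c=3, h=w=1 */

    float *x, *y, *dy, *dx;
    cudaMalloc(&x,  3 * sizeof(float));
    cudaMalloc(&y,  3 * sizeof(float));
    cudaMalloc(&dy, 3 * sizeof(float));
    cudaMalloc(&dx, 3 * sizeof(float));

    const float hx[3]  = {1.f, 2.f, 3.f};
    const float hdy[3] = {0.1f, 0.2f, 0.3f};
    cudaMemcpy(x,  hx,  sizeof hx,  cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hdy, sizeof hdy, cudaMemcpyHostToDevice);

    const float alpha = 1.f, beta = 0.f;

    /* Step 1: forward pass, y = softmax(x) over the channel dim. */
    cudnnSoftmaxForward(handle, CUDNN_SOFTMAX_ACCURATE,
                        CUDNN_SOFTMAX_MODE_CHANNEL,
                        &alpha, desc, x, &beta, desc, y);

    /* Step 2: backward pass, feeding the FORWARD OUTPUT y as yData. */
    cudnnSoftmaxBackward(handle, CUDNN_SOFTMAX_ACCURATE,
                         CUDNN_SOFTMAX_MODE_CHANNEL,
                         &alpha, desc, y, desc, dy, &beta, desc, dx);

    float hdx[3];
    cudaMemcpy(hdx, dx, sizeof hdx, cudaMemcpyDeviceToHost);
    printf("dx = %f %f %f\n", hdx[0], hdx[1], hdx[2]);

    cudaFree(x); cudaFree(y); cudaFree(dy); cudaFree(dx);
    cudnnDestroyTensorDescriptor(desc);
    cudnnDestroy(handle);
    return 0;
}
```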
You can check this easily with y and dy as 4D tensors with n=c=h=w=1. If the routine applied softmax to y itself, the output tensor dx would be

dx = (1 - softmax(y)) * softmax(y) * dy

(which is identically zero here, since softmax of a single element is always 1), but instead it is

dx = (1 - y) * y * dy

i.e., y is treated as an already-softmaxed value.
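For completeness, here is the CPU reference I compare against, encoding my understanding of the backward formula (my own code, not from the docs):

```c
#include <stdio.h>

/* CPU reference for what I believe cudnnSoftmaxBackward computes per
 * softmax slice: dx[i] = y[i] * (dy[i] - sum_j y[j]*dy[j]),
 * where y is assumed to ALREADY be a softmax output. */
static void softmax_backward_ref(const float *y, const float *dy,
                                 float *dx, int n) {
    float dot = 0.f;
    for (int j = 0; j < n; ++j)
        dot += y[j] * dy[j];
    for (int i = 0; i < n; ++i)
        dx[i] = y[i] * (dy[i] - dot);
}

int main(void) {
    /* Single-element case (n=c=h=w=1): the formula reduces to
     * dx = y * (1 - y) * dy, which matches what cuDNN returns. */
    const float y = 0.4f, dy = 2.0f;
    float dx;
    softmax_backward_ref(&y, &dy, &dx, 1);
    printf("dx = %f (expected %f)\n", dx, y * (1.f - y) * dy);
    return 0;
}
```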