
Referring to the documentation of pooling’s backward function here: API Reference :: NVIDIA Deep Learning cuDNN Documentation
What’s the difference between dy, dyDesc and dx, dxDesc? I thought (max) pooling backward would just return gradients for the input. Do I need to provide the gradients received from the next layer as dy and store max pooling’s grads in dx? In other words, will the backward function check which element produced the output and copy the gradient from dy for that element, writing 0 for the others in the window?

Hi,

In the forward pass, x is the input and y is the output, while in the backward pass, dy (the gradient with respect to y) is the input and dx (the gradient with respect to x) is the output.
However, x and y are also required as inputs to the backward pass, which is why there are three inputs plus one output.
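
For concreteness, here is a minimal sketch of the call in C. The handle, pooling descriptor, tensor descriptors, and the device buffer names (d_x, d_y, d_dy, d_dx) are assumptions, created elsewhere during setup and the forward pass:

```c
/* Minimal sketch: assumes handle, poolingDesc, the four tensor
 * descriptors, and the device buffers were already created, and that
 * d_x and d_y still hold the data from the forward pass. */
const float alpha = 1.0f, beta = 0.0f;
cudnnStatus_t status = cudnnPoolingBackward(
    handle, poolingDesc,
    &alpha,
    yDesc,  d_y,    /* forward-pass output (an input here)       */
    dyDesc, d_dy,   /* gradient w.r.t. y, from the next layer    */
    xDesc,  d_x,    /* forward-pass input (an input here)        */
    &beta,
    dxDesc, d_dx);  /* gradient w.r.t. x, written by this call   */
```

With beta = 0 the result overwrites d_dx; a nonzero beta blends the result with the existing contents of d_dx.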

Thank you.

Hi,

So, if I understand it correctly,

in the forward pass:

x - Input
y - Output

and in the backward pass:

x - The same input tensor that was fed to the forward pass, supplied again so the routine can tell which elements were selected.
y - The output previously produced by the forward pass, supplied again for the same reason.
dy - Gradients received from the next layer, i.e. the layer which received the pooling layer’s output in the forward pass.
dx - Gradients with respect to x, computed and written by the backward pass.

Is that correct?

Yes.
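
And to your original question about the routing: yes, for max pooling the backward pass sends the incoming gradient to the element that won the max and zero to the rest of the window. As intuition only, here is a hypothetical plain-C helper (not a cuDNN API) for a single pooling window:

```c
/* Hypothetical helper for intuition only; not part of cuDNN.
 * Routes the incoming gradient dy to the input element that produced
 * the forward output y, and writes 0 for the rest of the window. */
void max_pool_backward_window(const float *x, int window_len,
                              float y, float dy, float *dx)
{
    /* Non-max elements receive no gradient. */
    for (int i = 0; i < window_len; ++i)
        dx[i] = 0.0f;

    /* The element equal to the forward output y is the one that won
     * the max; it receives the whole incoming gradient. This is why
     * the backward pass needs x and y again as inputs. */
    for (int i = 0; i < window_len; ++i) {
        if (x[i] == y) {
            dx[i] = dy;
            break;  /* one winner per window */
        }
    }
}
```

In the real kernel, contributions from overlapping windows are accumulated into dx rather than written independently.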

