n-dimensional padding error

Hey guys!

There seems to be a serious problem with the n-dimensional padding in both the nd-pooling and nd-convolution algorithms.
The padded values are not zeros but (seemingly) uninitialized memory contents.
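For reference, here is a minimal sketch (plain NumPy, not cuDNN) of the expected zero-padding semantics for a 2-D convolution: positions outside the original input contribute exactly zero, so the result is fully deterministic.

```python
import numpy as np

def conv2d_zero_pad(x, k, pad):
    # Zero-pad the input: values outside the original array are exactly 0,
    # which is what a padded convolution/pooling call is expected to assume.
    xp = np.pad(x, pad, mode="constant", constant_values=0.0)
    kh, kw = k.shape
    oh = xp.shape[0] - kh + 1
    ow = xp.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

x = np.ones((3, 3))
k = np.ones((3, 3))
out = conv2d_zero_pad(x, k, pad=1)
# The corner output overlaps four zero-padded cells, so it sums fewer
# ones than the fully interior center output.
print(out[0, 0], out[1, 1])  # 4.0 9.0
```

If the library instead reads uninitialized memory for the padded region, the border outputs become nondeterministic garbage rather than the values above.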

Is this a known issue and is there already a fix planned?

I’m using the latest releases of CUDA (v7.5.1.8) and cuDNN 4 (v4.0.7).

What kind of problem? What is the expected behavior, and what is the actual behavior? How would one reproduce your observations?

If you have done sufficient due diligence to make sure the issue isn’t due to incorrect API usage or a bug in your own software, consider filing a bug report with NVIDIA (the form is linked from the CUDA registered developer website), attaching minimal, self-contained repro code.
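One way to make such a repro conclusive (a generic sketch, not cuDNN-specific; `buggy_op` is a hypothetical stand-in for the call under test) is to poison the destination buffer with a known sentinel before the call. Any surviving sentinel marks a position the operation never wrote, i.e. a spot where uninitialized memory would leak through:

```python
SENTINEL = -12345.0

def buggy_op(out):
    # Hypothetical stand-in for the library call under test: it is
    # supposed to overwrite every element (writing zeros into the
    # padded border), but here it deliberately skips the border
    # to mimic the reported bug.
    for i in range(1, len(out) - 1):
        out[i] = 1.0

out = [SENTINEL] * 8          # poison the destination before the call
buggy_op(out)

# Any surviving sentinel marks a position the op never wrote.
unwritten = [i for i, v in enumerate(out) if v == SENTINEL]
print(unwritten)  # [0, 7] -> those elements still hold stale memory
```

Running the real operation twice with differently poisoned buffers and diffing the outputs gives the same evidence on the GPU side.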