I discovered experimentally that it is possible to pass the same pointer for x and y to cudnnBatchNormalizationForwardTraining, and the output is still correct. In other words, the result of the normalization can be written over the input without affecting correctness. Is this always the case? If so, why is it not mentioned in the Developer Guide? This seems like an important property to have, because it enables various memory optimizations, e.g. reusing the activation buffer instead of allocating a separate output tensor.
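For what it's worth, here is a CPU sketch (NumPy, not cuDNN) of why in-place operation *can* be safe for this kind of normalization: the per-channel mean and variance are reductions that complete before any elementwise write happens, so overwriting x afterwards does not corrupt the statistics. Whether the actual cuDNN kernels are structured this way is exactly what I'm asking; this is just an illustration of the property, and `batchnorm_inplace` is my own hypothetical helper, not a cuDNN function.

```python
import numpy as np

def batchnorm_inplace(x, eps=1e-5):
    """Normalize x of shape (N, C, H, W) per channel, writing over x.

    The reductions (mean, var) finish before any element of x is
    overwritten, which is why aliasing the output onto the input
    buffer does not change the result.
    """
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x -= mean                     # elementwise writes start only now
    x /= np.sqrt(var + eps)
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3, 4, 4))

# Out-of-place reference: fresh output buffer, input untouched.
mean = x.mean(axis=(0, 2, 3), keepdims=True)
var = x.var(axis=(0, 2, 3), keepdims=True)
ref = (x - mean) / np.sqrt(var + 1e-5)

batchnorm_inplace(x)              # same buffer for input and output
assert np.allclose(x, ref)
```

On the GPU the question is subtler, because the reduction and the normalization may be fused into one kernel, so the guarantee would have to come from how that kernel stages its reads and writes.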