ConvNet Batch Normalization with cuDNN

Hi,
I’m trying to write my own batch normalization layer and have a few questions about the correct implementation with cuDNN.

The first question: how are the batch norm scale and bias terms (gamma and beta) updated with each epoch? The backward pass outputs resultBnScaleDiff and resultBnBiasDiff, but I’m not sure how these gradients should be applied to learn the bnScale and bnBias tensors. Simple addition/subtraction of the raw gradients does not seem to work.
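
For reference, here is the plain SGD step I’m currently attempting (a minimal sketch; the kernel name, the updateBnParams wrapper, and learning_rate are my own, and the gradient buffers are the resultBnScaleDiff / resultBnBiasDiff outputs of cudnnBatchNormalizationBackward):

```cpp
#include <cuda_runtime.h>

// Plain SGD step: theta <- theta - lr * dtheta, applied element-wise to the
// per-channel bnScale / bnBias buffers. The kernel and learning_rate are my
// own names; the gradients come from cudnnBatchNormalizationBackward.
__global__ void sgdStep(float* param, const float* grad,
                        float learning_rate, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        param[i] -= learning_rate * grad[i];
}

// Host side: for CUDNN_BATCHNORM_SPATIAL the parameter tensors are 1xCx1x1,
// so n = C (number of feature maps).
void updateBnParams(float* bnScale, const float* bnScaleDiff,
                    float* bnBias,  const float* bnBiasDiff,
                    float learning_rate, int C)
{
    int threads = 256;
    int blocks  = (C + threads - 1) / threads;
    sgdStep<<<blocks, threads>>>(bnScale, bnScaleDiff, learning_rate, C);
    sgdStep<<<blocks, threads>>>(bnBias,  bnBiasDiff,  learning_rate, C);
}
```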

My second set of questions concerns running the batch norm layer in “inference” mode. How are the saved mean and inverse variance computed during the forward training pass used to speed up the backward pass? If those cached values are passed in correctly, why is the original un-normalized x data also required? And should the bnScale and bnBias tensors always be all 1’s and all 0’s, respectively, when testing?
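
For context, this is roughly how I’m calling the backward pass, passing in the savedMean / savedInvVariance buffers returned by cudnnBatchNormalizationForwardTraining. The wrapper and all names other than the cuDNN parameters are mine, and error checking is omitted:

```cpp
#include <cudnn.h>

// Sketch of my backward call. xDesc/x is the ORIGINAL un-normalized input
// from the forward pass (the part I don't understand needing); savedMean and
// savedInvVariance are the caches from ForwardTraining.
void bnBackward(cudnnHandle_t handle,
                cudnnTensorDescriptor_t xDesc,  const float* x,
                cudnnTensorDescriptor_t dyDesc, const float* dy,
                cudnnTensorDescriptor_t dxDesc, float* dx,
                cudnnTensorDescriptor_t bnDesc, const float* bnScale,
                float* resultBnScaleDiff, float* resultBnBiasDiff,
                const float* savedMean, const float* savedInvVariance)
{
    const float one = 1.0f, zero = 0.0f;
    cudnnBatchNormalizationBackward(
        handle, CUDNN_BATCHNORM_SPATIAL,
        &one, &zero,                    // blend factors for dx
        &one, &zero,                    // blend factors for dScale/dBias
        xDesc, x,                       // original input (still required?)
        dyDesc, dy,                     // incoming gradient
        dxDesc, dx,                     // gradient w.r.t. the input
        bnDesc, bnScale,
        resultBnScaleDiff, resultBnBiasDiff,
        CUDNN_BN_MIN_EPSILON,           // must match the forward epsilon
        savedMean, savedInvVariance);   // caches from the forward pass
}
```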
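
And this is my inference-mode forward call. My current understanding (which may well be wrong, hence the question) is that bnScale / bnBias here should be the learned gamma and beta from training, while estimatedMean / estimatedVariance are the running statistics accumulated during training:

```cpp
#include <cudnn.h>

// Sketch of my inference call; names other than the cuDNN parameters are
// mine, and error checking is again omitted.
void bnForwardInference(cudnnHandle_t handle,
                        cudnnTensorDescriptor_t xDesc, const float* x,
                        cudnnTensorDescriptor_t yDesc, float* y,
                        cudnnTensorDescriptor_t bnDesc,
                        const float* bnScale, const float* bnBias,
                        const float* estimatedMean,
                        const float* estimatedVariance)
{
    const float one = 1.0f, zero = 0.0f;
    cudnnBatchNormalizationForwardInference(
        handle, CUDNN_BATCHNORM_SPATIAL,
        &one, &zero,
        xDesc, x, yDesc, y,
        bnDesc, bnScale, bnBias,        // learned gamma/beta? or 1's/0's?
        estimatedMean, estimatedVariance,
        CUDNN_BN_MIN_EPSILON);
}
```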

Thanks for any help.