BatchNormalization on TensorFlow 0.8

On my PC, I have Keras 1.2 and TF 1.14.0. I can run the code below, which applies batch normalization in Keras:

self.gl_inp = keras.layers.BatchNormalization(name='norm_2')(self.gl_inp)

But on a Tegra K1 (TF 0.8 and Keras 1.2), I get the following error at the batch normalization step:

"moments() got an unexpected argument shift"

I also tried tf.layers.BatchNormalization and tf.nn.BatchNormalization;

but with those I get the following error:

"AttributeError: 'tuple' object has no attribute 'layer'"

In summary, how can I use BatchNormalization with TF 0.8?

Hi,

I checked the source code of TensorFlow 0.8.
It does have batch normalization.

You could apply this function for your use case:
https://github.com/tensorflow/tensorflow/blob/r0.8/tensorflow/python/ops/nn.py#L712

tf.nn.batch_normalization
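
For reference, a minimal sketch of how it could be wired up by hand with that function. The input shape is taken from the error message in this thread, and the variable names, epsilon, and placeholder are illustrative assumptions, not your actual model:

```python
import tensorflow as tf

# Placeholder standing in for the layer input; shape (batch, 13, 13, 128)
# is borrowed from the error message above, purely as an illustration.
x = tf.placeholder(tf.float32, shape=[None, 13, 13, 128])

# Per-channel mean/variance over the batch and spatial axes.
# Note: in TF 0.8, tf.nn.moments() has no 'shift' argument, which is why
# Keras 1.2's BatchNormalization layer fails on this backend.
mean, variance = tf.nn.moments(x, axes=[0, 1, 2])

# Trainable scale (gamma) and offset (beta), one value per channel.
gamma = tf.Variable(tf.ones([128]))
beta = tf.Variable(tf.zeros([128]))

y = tf.nn.batch_normalization(x, mean, variance,
                              offset=beta, scale=gamma,
                              variance_epsilon=1e-3)
```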

Thanks.

Hi,

Thanks for the reply.

Now, when I try tf.nn.batch_normalization, I get the following error on this line:

self.model = Model(self.input_image, self.gl_inp)

TypeError: Output tensors to a Model must be Keras tensors. Found: Tensor("sub_7:0", shape=(?, 13, 13, 128), dtype=float32)
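
If it helps, this is roughly the pattern that produces the error on my side. The shapes and names below are a simplified sketch, not my real code:

```python
import tensorflow as tf
from keras.layers import Input
from keras.models import Model

# Keras input tensor (shape simplified for illustration)
input_image = Input(shape=(13, 13, 128))

# Batch statistics and normalization done with raw TF ops
mean, variance = tf.nn.moments(input_image, axes=[0, 1, 2])
gl_inp = tf.nn.batch_normalization(input_image, mean, variance,
                                   offset=None, scale=None,
                                   variance_epsilon=1e-3)

# This line raises the TypeError quoted above
model = Model(input_image, gl_inp)
```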

Hi,

It's a type error. That means you cannot use the float32 data type.
It's recommended to check the documentation on GitHub for the supported formats first.

Thanks.