`tf.slim.batch_norm` or `tf.layers.batch_normalization`?

Sorry to pose another question; this one has bothered me for a long time.

The official docs (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#support_op) say TensorRT supports the FusedBatchNorm op, but they do not say which TensorFlow batch norm layer this corresponds to: tf.layers.batch_normalization or tf.slim.batch_norm?

In my model I use tf.slim.batch_norm, but when converting to a UFF file I get an error that many TensorRT users have encountered:

[TensorRT] INFO: UFFParser: parsing model_0/resnet_v1_50/conv1/batch_normalization/FusedBatchNorm
[TensorRT] ERROR: Parameter check failed at: Utils.cpp::reshapeWeights::71, condition: input.values != nullptr
[TensorRT] ERROR: UFFParser: Parser error: model_0/resnet_v1_50/conv1/batch_normalization/FusedBatchNorm: reshape weights failed!

Does this mean that TensorRT does not support tf.slim.batch_norm?
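For context, the `input.values != nullptr` check is about constant weights: as far as I understand, when importing FusedBatchNorm the UFF parser folds the frozen moving mean, moving variance, gamma, and beta into a single per-channel scale and shift, so all four must be constant tensors in the frozen graph; if any of them is produced by an op at runtime instead (as can happen when the graph is exported with training=True), the parser has no values to fold and fails with exactly this error. The folding arithmetic itself is simple; a minimal NumPy sketch (illustrative values, not TensorRT's actual code):

```python
import numpy as np

# Per-channel batch-norm parameters, as they would appear as constants
# in a frozen graph (4 channels here, purely illustrative values).
gamma = np.array([1.0, 0.5, 2.0, 1.5])
beta = np.array([0.1, -0.2, 0.0, 0.3])
moving_mean = np.array([0.0, 1.0, -1.0, 0.5])
moving_var = np.array([1.0, 4.0, 0.25, 2.0])
eps = 1e-5

# Inference-time batch norm: y = gamma * (x - mean) / sqrt(var + eps) + beta
x = np.random.randn(8, 4)  # a batch of 8 vectors with 4 channels
bn = gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

# Folded into a single per-channel scale + shift, which is all the
# importer needs to build a scale layer from the constants:
scale = gamma / np.sqrt(moving_var + eps)
shift = beta - moving_mean * scale
folded = scale * x + shift

assert np.allclose(bn, folded)
```

The folding only works because mean, variance, gamma, and beta are fixed numbers at export time, which is why the graph must be frozen in inference mode.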

When I use tf.layers.batch_normalization instead of tf.slim.batch_norm and leave the training parameter at its default (False), it works. But when I train a model with training=True, convert the trained checkpoint to a .pb and then a .uff file, and parse the .uff file, I still get the error above.

So, which type of batch norm layer does TensorRT support?

Linux distro and version:

LSB Version:	:core-4.1-amd64:core-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.4.1708 (Core)
Release:	7.4.1708
Codename:	Core

Other environment details:

GPU type: Tesla V100
NVIDIA driver version: 396.44
CUDA version: 9.0
cuDNN version: 7.3.0
Python version: 2.7
TensorRT version: 5.0.2.6
gcc: >5.3/lib64

Looking forward to your reply.

Hello,

Neither; tf.layers.batch_normalization and tf.slim.batch_norm are both high-level wrappers that do multiple things. A FusedBatchNorm op is only created when you pass fused=True.

Hi NVES:
I have passed fused=True to tf.layers.batch_normalization, but I still get the same error:

INFO: UFFParser: parsing net/conv1/weight
INFO: UFFParser: parsing net/conv1/conv
INFO: UFFParser: parsing net/conv1/bias
INFO: UFFParser: parsing net/conv1/BiasAdd
INFO: UFFParser: parsing net/conv1/batch_normalization/gamma
INFO: UFFParser: parsing net/conv1/batch_normalization/beta
INFO: UFFParser: parsing net/conv1/batch_normalization/Const
ERROR: Parameter check failed at: Utils.cpp::reshapeWeights::70, condition: input.values != nullptr
ERROR: UFFParser: Parser error: net/conv1/batch_normalization/Const: reshape weights failed!
ERROR: Fail to parse fp32
ERROR: Network must have at least one output
ERROR: Unable to create engine

Hi, have you solved this problem? Thanks very much.

Your answer is not helpful; could you give more details on how to use batch norm with TensorRT correctly? Thank you.

My implementation of the fused batch norm is as follows:

def batch_norm(inputs, training, data_format):
  """Performs a batch normalization using a standard set of parameters."""
  # We set fused=True for a significant performance boost. See
  # https://www.tensorflow.org/performance/performance_guide#common_fused_ops
  # return tf.contrib.slim.batch_norm(inputs=inputs, data_format="NCHW", scale=True, fused=True)
  return tf.layers.batch_normalization(
      inputs=inputs, axis=1 if data_format == 'channels_first' else 3,
      momentum=_BATCH_NORM_DECAY, epsilon=_BATCH_NORM_EPSILON, center=True,
      scale=True, training=training, fused=True)

But the error still occurs when converting the UFF model to a TensorRT engine!

Good morning all,

I realise it’s been some time since this question was asked, but I thought I’d share a related problem, its solution, and, I think, the solution to this question.

I used tf.keras.layers.BatchNormalization() in a network I trained using TensorFlow's Keras API. I froze the network's graph, saved it as a *.pb file, and then converted it to a *.uff file. When parsing the *.uff file I received the following error:

UffParser: Parser error: batch_normalization/FusedBatchNorm: Invalid scale mode, nbWeights: 224
Could not parse UFF file

The expected shape of the layer's input is [-1, 32, 224, 224]; however, it looked like the layer was normalising on the wrong axis, i.e. axis 3. Reading through the forums, NVIDIA moderators have suggested that the problem with the batch normalisation layer is due to a reshape operation. Looking at TensorFlow's documentation for tf.keras.layers.BatchNormalization(), it says "...with data_format="channels_first", set axis=1".

Subsequently, using tf.keras.layers.BatchNormalization(axis=1) meant that the *.uff file was parsed successfully.

Regarding this post's problem, I think the solution is to specify the axis that the normalisation is done on: for NCHW-formatted data, set axis=1.
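To make the axis point concrete: batch norm keeps one gamma/beta/mean/variance parameter per element of the normalised axis. With the [-1, 32, 224, 224] NCHW input above and the default axis=-1, that gives 224 parameters (the width), which matches the "nbWeights: 224" in my error, whereas axis=1 gives the expected 32 (the channels). A small NumPy check of the parameter counts (illustrative shapes only):

```python
import numpy as np

x = np.random.randn(1, 32, 224, 224)  # NCHW: 32 channels, 224x224 spatial

# Batch norm keeps one parameter per element of the chosen axis,
# reducing over all the other axes.
n_params_default_axis = x.mean(axis=(0, 1, 2)).shape[0]  # axis=-1 (width)
n_params_channel_axis = x.mean(axis=(0, 2, 3)).shape[0]  # axis=1 (channels)

assert n_params_default_axis == 224  # matches "nbWeights: 224" in the error
assert n_params_channel_axis == 32   # what a channel-wise scale expects
```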

I hope this is able to help others who may come across this post when Googling the same problem.

Best regards,

Frazer