uff_to_trt does not convert BatchNorm op

Hi,
I'm trying to convert my model to run on a Jetson TX2. I built my TensorFlow model in NCHW (it runs fine in TF), and the input data size is (448, 576).
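The graph takes two single-channel inputs; simplified, they look roughly like this (a sketch, not the exact code — the shapes match the register_input calls further down, and the concat matches the concat node in the parser log):

import tensorflow as tf

# sketch of the graph inputs: two single-channel NCHW placeholders
x_placeholder = tf.placeholder(tf.float32, [None, 1, 448, 576], name='x_placeholder')
y_placeholder = tf.placeholder(tf.float32, [None, 1, 448, 576], name='y_placeholder')
# 'data' is their concatenation along the channel axis (NCHW -> axis=1),
# which is the concat node visible in the parser output below
data = tf.concat([x_placeholder, y_placeholder], axis=1, name='concat')

The network itself starts like this: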

(self.feed('data')
     .conv2d(32, kernel_h=3, kernel_w=3, stride_h=2, stride_w=2,
             biased=False, padding='SAME', relu=False, name='conv1_1_3x3_s2')  # 1/2 resolution
     .batch_normalization(relu=True, name='conv1_1_3x3_s2_bn')
     .conv2d(32, kernel_h=3, kernel_w=3, stride_h=1, stride_w=1,
             biased=False, padding='SAME', relu=False, name='conv1_2_3x3')
     .batch_normalization(relu=True, name='conv1_2_3x3_bn')
     # ... rest of the network
 )
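Since it is the BatchNorm op that trips up the parser: the batch_normalization helper boils down to fused batch norm over the channel axis, roughly like this (simplified sketch, not the exact code from my network builder):

import tensorflow as tf

def batch_normalization(inputs, name, relu=False):
    # fused BN over axis=1 (the channel axis, since the model is NCHW);
    # this is what creates the gamma/beta/moving_mean/moving_variance
    # variables and the FusedBatchNorm node seen in the parser log
    out = tf.layers.batch_normalization(inputs, axis=1, fused=True,
                                        training=False, name=name)
    return tf.nn.relu(out) if relu else out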

Then I follow these steps to convert it to TensorRT:

import tensorflow as tf
import uff
import tensorrt as trt
from tensorrt.parsers import uffparser

# freeze the graph and strip training-only nodes
output_graph_def = tf.graph_util.convert_variables_to_constants(
    self.sess,
    tf.get_default_graph().as_graph_def(),
    self.output_layers)
frozen = tf.graph_util.remove_training_nodes(output_graph_def)

# convert the frozen graph to UFF
uff_model = uff.from_tensorflow(frozen, self.output_layers)

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
MAX_WORKSPACE = 1 << 30

# register the inputs in CHW order (input order 0) and the output, then build the engine
parser = uffparser.create_uff_parser()
parser.register_input("x_placeholder", (1, 448, 576), 0)
parser.register_input("y_placeholder", (1, 448, 576), 0)
parser.register_output("output_layer_name")
engine = trt.utils.uff_to_trt_engine(G_LOGGER,
                                     uff_model,
                                     parser,
                                     1,              # max batch size
                                     MAX_WORKSPACE,
                                     datatype=trt.infer.DataType.FLOAT)
assert engine

The UFF conversion step runs fine:

Converting to UFF graph
No. nodes: 548

but building the TRT engine fails with the following error:

[TensorRT] INFO: UFFParser: parsing output_layer_name
[TensorRT] INFO: UFFParser: parsing x_placeholder
[TensorRT] INFO: UFFParser: parsing y_placeholder
[TensorRT] INFO: UFFParser: parsing concat
[TensorRT] INFO: UFFParser: parsing conv1_1_3x3_s2/weights
[TensorRT] INFO: UFFParser: parsing conv1_1_3x3_s2/Conv2D
[TensorRT] INFO: UFFParser: Convolution: add Padding Layer to support asymmetric padding
[TensorRT] INFO: UFFParser: Convolution: Left: 0
[TensorRT] INFO: UFFParser: Convolution: Right: 1
[TensorRT] INFO: UFFParser: Convolution: Top: 0
[TensorRT] INFO: UFFParser: Convolution: Bottom: 1
[TensorRT] INFO: UFFParser: parsing conv1_1_3x3_s2_bn/gamma
[TensorRT] INFO: UFFParser: parsing conv1_1_3x3_s2_bn/beta
[TensorRT] INFO: UFFParser: parsing conv1_1_3x3_s2_bn/moving_mean
[TensorRT] INFO: UFFParser: parsing conv1_1_3x3_s2_bn/moving_variance
[TensorRT] INFO: UFFParser: parsing conv1_1_3x3_s2_bn/FusedBatchNorm
[TensorRT] ERROR: UFFParser: Parser error: conv1_1_3x3_s2_bn/FusedBatchNorm: Invalid scale mode, nbWeights: 288
[TensorRT] ERROR: Failed to parse UFF model stream

Does TensorRT support batch norm? What does "Invalid scale mode, nbWeights: 288" mean?
Why 288, which is exactly half of the input width (576 / 2 = 288)?
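In case it helps narrow things down, the shapes of the constants feeding that FusedBatchNorm node can be read straight out of the frozen GraphDef (the `frozen` variable from the conversion code above), e.g. with a quick debugging sketch like this:

# sketch: dump the shapes of the Const inputs (gamma/beta/mean/variance)
# of the first FusedBatchNorm node in the frozen GraphDef
consts = {n.name: n for n in frozen.node if n.op == 'Const'}
for node in frozen.node:
    if node.op == 'FusedBatchNorm':
        print(node.name)
        for inp in node.input[1:]:  # skip the activation input
            c = consts.get(inp.split(':')[0])
            if c is not None:
                dims = [d.size for d in c.attr['value'].tensor.tensor_shape.dim]
                print('  %s -> shape %s' % (inp, dims))
        break  # only the first BN layer, conv1_1_3x3_s2_bn

That should show whether the BN parameters already have an unexpected size on the TensorFlow side, or whether the 288 only appears inside the UFF parser.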

I'm using TensorRT 4, CUDA 9, and cuDNN 7.1.3.
Thanks