(tf.concat problem) Order size is not matching the number dimensions of TensorRT

I wrote a network that includes a concat layer, and right after it the conversion fails with an "Order size is not matching" error.

The add_denseblock code below trains fine and produces a .pb file, but the model cannot be converted to a TensorRT engine.

import tensorflow as tf


def instance_norm(x, w, h, n_features, training):
    with tf.variable_scope('instance_norm') as scope:
        _BATCH_NORM_DECAY = 0.997
        print("instance_norm x.shape: ", x.shape)
        # fold the spatial dimensions and split the channels into groups of 3
        x_reshaped = tf.reshape(x, [-1, w*h, n_features//3, 3])
        #x_reshaped = tf.transpose(x_reshaped, [0, 2, 3, 1])

        # fused batch norm over axis 1 (the folded w*h axis)
        y = tf.layers.batch_normalization(
            inputs=x_reshaped, axis=1,
            momentum=_BATCH_NORM_DECAY, epsilon=1e-3, center=True,
            scale=True, training=training, fused=True)
        #y = tf.transpose(y, [0, 3, 1, 2])
        # restore the original NHWC shape
        y = tf.reshape(y, [-1, w*h, n_features])
        y_reshape = tf.reshape(y, [-1, w, h, n_features])
        return y_reshape
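
For comparison, here is a reshape-free instance-normalization sketch (not my original code) that computes per-sample, per-channel moments with tf.nn.moments over the spatial axes, so every tensor in the graph stays 4-D; the gamma/beta variables are my own additions, and I have not verified whether the UFF parser accepts the resulting Mean ops:

def instance_norm_4d(x, n_features, epsilon=1e-3):
    # sketch: normalize each sample and channel over its spatial extent
    # without ever leaving rank 4 (NHWC)
    with tf.variable_scope('instance_norm_4d'):
        mean, variance = tf.nn.moments(x, axes=[1, 2], keep_dims=True)
        gamma = tf.Variable(tf.ones([n_features]), name='gamma')
        beta = tf.Variable(tf.zeros([n_features]), name='beta')
        return gamma * (x - mean) * tf.rsqrt(variance + epsilon) + beta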




def conv2d_reflct(input,
           input_filters,
           output_filters,
           kernel,
           strides,
           mode='REFLECT'):
    '''
    add REFLECT padding and then conv2d
    '''
    with tf.variable_scope('conv') as scope:
        if kernel == 1:
            # a 1x1 kernel needs no padding
            input_pad = input
        else:
            # pad kernel // 2 pixels on each spatial side; tf.pad defaults
            # to CONSTANT, so the mode must be passed explicitly
            input_pad = tf.pad(tensor=input,
                               paddings=[[0, 0],
                                         [kernel // 2, kernel // 2],
                                         [kernel // 2, kernel // 2],
                                         [0, 0]],
                               mode=mode,
                               name='input_pad')
        shape = [kernel, kernel, input_filters, output_filters]
        weight = tf.Variable(tf.truncated_normal(shape=shape,
                                                 mean=0.0,
                                                 stddev=0.1),
                             name='weight')

        return tf.nn.conv2d(input=input_pad,
                            filter=weight,
                            strides=[1, strides, strides, 1],
                            padding="VALID",
                            name='conv')
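
A minimal usage sketch (the placeholder shape and filter counts are made up for illustration):

# hypothetical NHWC input: 256x256 feature maps with 32 channels
x = tf.placeholder(tf.float32, [None, 256, 256, 32], name='x')
y = conv2d_reflct(input=x, input_filters=32, output_filters=64,
                  kernel=3, strides=1, mode='REFLECT')
# padding by kernel // 2 on each side keeps the spatial size at 256x256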



def composite_function(_input, w, h, c, out_features, training, kernel_size=3):
    with tf.variable_scope("composite_function"):
        # BN
        # output = batch_norm(_input, is_training)
        # ReLU
        output = tf.nn.relu(_input)
        # convolution; track the channel count explicitly instead of
        # reading it back from the graph
        #in_features = int(output.get_shape()[-1])
        in_features = c
        output = conv2d_reflct(input=output,
                               input_filters=in_features,
                               output_filters=out_features,
                               kernel=kernel_size,
                               strides=1,
                               mode="REFLECT")
        output = instance_norm(output, w, h, out_features, training)
        # no dropout for style transfer
    return output


def bottleneck(_input, in_features, out_features, is_training=False):
    with tf.variable_scope("bottleneck"):
        # output = batch_norm(_input, is_training)
        output = tf.nn.relu(_input)
        # 1x1 conv expands to 4x the growth rate
        inter_features = out_features * 4
        # in_features = int(output.get_shape()[-1])
        print("bottleneck:shape:", output.shape)
        output = conv2d_reflct(input=output,
                               input_filters=in_features,
                               output_filters=inter_features,
                               kernel=1,
                               strides=1,
                               mode="REFLECT")
        print(output.shape)
    return output, inter_features


def add_internal_layer(_input, w, h, c, growth_rate, training):
    temp_c = c
    # 1x1 bottleneck, then composite_function with a 3x3 kernel
    print("add_internal_layer:_input:shape:", _input.shape)
    bottleneck_out, c = bottleneck(_input,
                                   in_features=c,
                                   out_features=growth_rate,
                                   is_training=training)
    print("add_internal_layer:bottleneck_out:shape:", bottleneck_out.shape)
    comp_out = composite_function(bottleneck_out, w, h, c,
                                  out_features=growth_rate,
                                  kernel_size=3,
                                  training=training)
    print("add_internal_layer:comp_out:shape:", comp_out.shape)

    # the concat that triggers the TensorRT error
    output = tf.concat(axis=3, values=(_input, comp_out))
    print("add_internal_layer:output:shape:", output.shape)
    c = temp_c + growth_rate
    return output, c


def add_denseblock(_input, w, h, c, growth_rate, layers_per_block, training):
    output = _input

    for layer in range(layers_per_block):
        with tf.variable_scope("layer_%d" % layer):
            output, c = add_internal_layer(output, w, h, c, growth_rate,
                                          training=training)
    return output
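
Each internal layer concatenates growth_rate new feature maps onto its input, so a block's output channel count grows by layers_per_block * growth_rate. A usage sketch with made-up sizes:

# made-up sizes for illustration
x = tf.placeholder(tf.float32, [None, 64, 64, 48], name='block_in')
out = add_denseblock(x, w=64, h=64, c=48,
                     growth_rate=12,
                     layers_per_block=4, training=False)
# each layer adds 12 channels: out is [-1, 64, 64, 48 + 4 * 12]

The TensorRT conversion then fails with: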

[TensorRT] INFO: UFFParser: parsing x
[TensorRT] INFO: UFFParser: parsing generator/dmux1/dmux/Reshape/shape
[TensorRT] INFO: UFFParser: parsing generator/dmux1/dmux/Reshape
[TensorRT] INFO: UFFParser: parsing generator/dmux1/dmux/transpose
[TensorRT] INFO: UFFParser: parsing generator/dmux1/dmux/Reshape_1/shape
[TensorRT] INFO: UFFParser: parsing generator/dmux1/dmux/Reshape_1
[TensorRT] INFO: UFFParser: parsing generator/dmux1/dmux/transpose_1
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/Reshape/shape
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/Reshape
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/batch_normalization/gamma
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/batch_normalization/beta
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/batch_normalization/moving_mean
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/batch_normalization/moving_variance
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/batch_normalization/FusedBatchNorm
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/Reshape_1/shape
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/Reshape_1
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/Reshape_2/shape
[TensorRT] INFO: UFFParser: parsing generator/dmux1/instance_norm/Reshape_2
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/bottleneck/Relu
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/bottleneck/conv/weight
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/bottleneck/conv/conv
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/Relu
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/conv/weight
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/conv/conv
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/Reshape/shape
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/Reshape
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/batch_normalization/gamma
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/batch_normalization/beta
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/batch_normalization/moving_mean
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/batch_normalization/moving_variance
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/batch_normalization/FusedBatchNorm
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/Reshape_1/shape
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/Reshape_1
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/Reshape_2/shape
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/composite_function/instance_norm/Reshape_2
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_0/concat
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_1/bottleneck/Relu
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_1/bottleneck/conv/weight
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_1/bottleneck/conv/conv
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_1/composite_function/Relu
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_1/composite_function/conv/weight
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_1/composite_function/conv/conv
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_1/composite_function/instance_norm/Reshape/shape
[TensorRT] INFO: UFFParser: parsing generator/denseblock1/layer_1/composite_function/instance_norm/Reshape
[TensorRT] ERROR: generator/denseblock1/layer_1/bottleneck/conv/conv: kernel weights has count 1152 but 576 was expected
[TensorRT] ERROR: UFFParser: Parser error: generator/denseblock1/layer_1/composite_function/instance_norm/Reshape: Order size is not matching the number dimensions of TensorRT
[TensorRT] ERROR: Failed to parse UFF model stream
Traceback (most recent call last):
  File "/home/wang/Downloads/aifilter2_tf_copy/convert.py", line 81, in <module>
    create_and_save_inference_engine()
  File "/home/wang/Downloads/aifilter2_tf_copy/convert.py", line 60, in create_and_save_inference_engine
    trt.infer.DataType.FLOAT
  File "/home/wang/.local/lib/python2.7/site-packages/tensorrt/utils/_utils.py", line 263, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 255 in statement assert(parser.parse(stream, network, model_datatype))

Process finished with exit code 1


This error happens right after a concat layer. Can anyone give any suggestions?

concat_error.zip (1.38 MB)

Ubuntu 16.04
TensorRT 4.0
CUDA 9.0
cuDNN 7.1.3
tensorflow-gpu 1.9

I added a tf.reshape(output, [-1, w, h, c]) right after the tf.concat, and with it the conversion skips this error. This confirms the error is caused by tf.concat.
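
In code, the workaround looks like this (a sketch of the change in add_internal_layer; temp_c + growth_rate is the channel count after the concat):

output = tf.concat(axis=3, values=(_input, comp_out))
# workaround: restate the full static 4-D shape right after the concat
output = tf.reshape(output, [-1, w, h, temp_c + growth_rate])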

However, a new error is raised if the concatenated tensors' dimensions differ, even when they differ only along the concatenation axis.

Hello,

To help us debug, can you please share a small repro package containing your network/.pb file and the conversion source that exhibits the "Order size is not matching" error you are seeing?

NVES

Of course. Which email address should I submit it to? I think this concatenation problem should be solved; otherwise many networks will run into the same problem.

You can attach repros to your posts. Please reference https://devtalk.nvidia.com/default/topic/1043356/tensorrt/attaching-files-to-forum-topics-posts/

Hello, I have uploaded my pb file. Please check it.

Hello, this issue has been reviewed by the TensorRT engineering team. The error indicates that the parser does not support arbitrary tensor dimensions right now; every tensor has to be 4-dimensional (NCHW).
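
For reference, a minimal conversion sketch using the TensorRT 4 legacy Python API (the output node name, file path, and input sizes are placeholders; the input name 'x' matches the log above — the parser registers inputs as CHW with an implicit batch dimension, i.e. every tensor is 4-D NCHW):

import tensorrt as trt
import uff
from tensorrt.parsers import uffparser

# serialize the frozen graph to UFF; output node name is a placeholder
uff_model = uff.from_tensorflow_frozen_model('model.pb', ['generator/output'])

parser = uffparser.create_uff_parser()
parser.register_input('x', (3, 256, 256), 0)  # CHW dims, placeholder sizes
parser.register_output('generator/output')

engine = trt.utils.uff_to_trt_engine(
    trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO),
    uff_model, parser,
    1,          # max batch size
    1 << 30,    # max workspace size (1 GiB)
    trt.infer.DataType.FLOAT)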