UFFParser: Reshape: Volume mismatch

Hi, I ran into an error while parsing a UFF file.
The model works normally in TensorFlow, and converting it to UFF produces no error message; the failure only appears when the UFF file is parsed by TensorRT.

Here is the parse log:

INFO: UFFParser: parsing lpr_net/conv7/LeakyRelu/alpha
INFO: UFFParser: parsing lpr_net/conv6/LeakyRelu/alpha
INFO: UFFParser: parsing lpr_net/conv5/LeakyRelu/alpha
INFO: UFFParser: parsing lpr_net/conv4/LeakyRelu/alpha
INFO: UFFParser: parsing lpr_net/conv3/LeakyRelu/alpha
INFO: UFFParser: parsing lpr_net/conv2/LeakyRelu/alpha
INFO: UFFParser: parsing lpr_net/conv1/LeakyRelu/alpha
INFO: UFFParser: parsing InputImage
INFO: UFFParser: parsing lpr_net/conv1/conv2d/kernel
INFO: UFFParser: parsing lpr_net/conv1/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/conv1/conv2d/bias
INFO: UFFParser: parsing lpr_net/conv1/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/conv1/LeakyRelu/mul
INFO: UFFParser: parsing lpr_net/conv1/LeakyRelu
INFO: UFFParser: parsing lpr_net/maxpool1/max_pooling2d/MaxPool
INFO: UFFParser: parsing lpr_net/conv2/conv2d/kernel
INFO: UFFParser: parsing lpr_net/conv2/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/conv2/conv2d/bias
INFO: UFFParser: parsing lpr_net/conv2/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/conv2/LeakyRelu/mul
INFO: UFFParser: parsing lpr_net/conv2/LeakyRelu
INFO: UFFParser: parsing lpr_net/maxpool2/max_pooling2d/MaxPool
INFO: UFFParser: parsing lpr_net/conv3/conv2d/kernel
INFO: UFFParser: parsing lpr_net/conv3/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/conv3/conv2d/bias
INFO: UFFParser: parsing lpr_net/conv3/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/conv3/LeakyRelu/mul
INFO: UFFParser: parsing lpr_net/conv3/LeakyRelu
INFO: UFFParser: parsing lpr_net/conv4/conv2d/kernel
INFO: UFFParser: parsing lpr_net/conv4/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/conv4/conv2d/bias
INFO: UFFParser: parsing lpr_net/conv4/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/conv4/LeakyRelu/mul
INFO: UFFParser: parsing lpr_net/conv4/LeakyRelu
INFO: UFFParser: parsing lpr_net/maxpool3/max_pooling2d/MaxPool
INFO: UFFParser: parsing lpr_net/conv5/conv2d/kernel
INFO: UFFParser: parsing lpr_net/conv5/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/conv5/conv2d/bias
INFO: UFFParser: parsing lpr_net/conv5/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/conv5/LeakyRelu/mul
INFO: UFFParser: parsing lpr_net/conv5/LeakyRelu
INFO: UFFParser: parsing lpr_net/conv6/conv2d/kernel
INFO: UFFParser: parsing lpr_net/conv6/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/conv6/conv2d/bias
INFO: UFFParser: parsing lpr_net/conv6/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/conv6/LeakyRelu/mul
INFO: UFFParser: parsing lpr_net/conv6/LeakyRelu
INFO: UFFParser: parsing lpr_net/maxpool4/max_pooling2d/MaxPool
INFO: UFFParser: parsing lpr_net/conv7/conv2d/kernel
INFO: UFFParser: parsing lpr_net/conv7/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/conv7/conv2d/bias
INFO: UFFParser: parsing lpr_net/conv7/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/conv7/LeakyRelu/mul
INFO: UFFParser: parsing lpr_net/conv7/LeakyRelu
INFO: UFFParser: parsing lpr_net/transpose
INFO: UFFParser: parsing lpr_net/Reshape/shape
INFO: UFFParser: parsing lpr_net/Reshape
INFO: UFFParser: parsing lpr_net/temporal_block_0/batch_norm_conv2d_0/Pad/paddings
INFO: UFFParser: parsing lpr_net/temporal_block_0/batch_norm_conv2d_0/Pad
INFO: UFFParser: parsing lpr_net/temporal_block_0/batch_norm_conv2d_0/conv2d/kernel
INFO: UFFParser: parsing lpr_net/temporal_block_0/batch_norm_conv2d_0/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/temporal_block_0/batch_norm_conv2d_0/conv2d/bias
INFO: UFFParser: parsing lpr_net/temporal_block_0/batch_norm_conv2d_0/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/temporal_block_0/batch_norm_conv2d_0/Relu
INFO: UFFParser: parsing lpr_net/temporal_block_0/attention_block_0/dense_1/Tensordot/transpose
INFO: UFFParser: parsing lpr_net/temporal_block_0/attention_block_0/dense_1/Tensordot/Reshape/shape
INFO: UFFParser: parsing lpr_net/temporal_block_0/attention_block_0/dense_1/Tensordot/Reshape
ERROR: UFFParser: Parser error: lpr_net/temporal_block_0/attention_block_0/dense_1/Tensordot/Reshape: Reshape: Volume mismatch
ERROR: Fail to parse

Thanks for any response.

Hi,

May I know the input format of your model? NCHW or NHWC?

It’s recommended to use NCHW data, since our implementation targets that format.
We keep finding issues when converting models that use a different input data format.
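For example, with tf.layers this usually means building the graph with data_format='channels_first' (just a sketch, not your exact model):

import tensorflow as tf

# NCHW input: [batch, channels, height, width]
x = tf.placeholder(tf.float32, [1, 3, 64, 128], name="input")
conv = tf.layers.conv2d(x, filters=64, kernel_size=3, padding="same",
                        data_format="channels_first")
pool = tf.layers.max_pooling2d(conv, pool_size=2, strides=2,
                               data_format="channels_first")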

Thanks.

@AastaLLL, Thanks for the reply

This is the setting in my Python script for converting the model; I have already set the format of the input placeholder to NCHW.

import tensorflow as tf
import graphsurgeon as gs

# Input placeholder in NCHW order: [batch, channels, height, width]
Input = gs.create_node("InputImage",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 1, 64, 128])

namespace_plugin_map = {
    "image_placeholder": Input,
    # BatchMatMul to MatMul
    ...
}
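For context, the map above is applied with graphsurgeon and then converted to UFF roughly like this (a sketch; the frozen-graph path and output node name below are placeholders, not my real ones):

import graphsurgeon as gs
import uff

graph = gs.DynamicGraph("frozen_model.pb")            # placeholder path to the frozen .pb
graph.collapse_namespaces(namespace_plugin_map)       # swap image_placeholder for the NCHW InputImage node
uff.from_tensorflow(graph.as_graph_def(),
                    output_nodes=["logits"],           # placeholder output node name
                    output_filename="lpr_net.uff")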

Hi @AastaLLL

I have done some tests, and I found that if the output of a node has 2 dimensions, the parser raises the Reshape: Volume mismatch error.

For example,

The dimension after lpr_net/temporal_block_0/attention_block_0/dense_1/Tensordot/Reshape is Nx64.

I also retrained a model without attention_block_0, and it still hits the Reshape: Volume mismatch error, this time at lpr_net/Reshape_1; the dimension after lpr_net/Reshape_1 is also Nx64.

It looks like the volume mismatch occurs whenever the output of the reshape has 2 dimensions.
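To make the pattern concrete, here is a minimal sketch of what I mean (the shapes are placeholders, not my real layer sizes):

import tensorflow as tf

# Hypothetical shapes, only to illustrate the pattern.
x = tf.placeholder(tf.float32, [1, 1, 8, 8], name="x")
# A Reshape whose output is 2-D (N x 64); this is the kind of node where
# I see "Reshape: Volume mismatch" from the UFF parser.
y = tf.reshape(x, [1, 64], name="reshape_2d")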

Hi,

There are two phases for a TensorFlow model:
- TF -> UFF (the UFF parser)
- UFF -> TRT (the TensorRT engine)

The Reshape layer is implemented with the Shuffle layer in TensorRT,
and we support input/output dimensions from 0 to 7:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#shuffle-layer
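For reference, a reshape expressed directly through the TensorRT network API is just a Shuffle layer with reshape_dims set (a minimal sketch with placeholder shapes):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()

# Per-sample input of shape [C, H, W]; the batch dimension is implicit here.
x = network.add_input("x", trt.float32, (64, 1, 32))

# Reshape [64, 1, 32] -> [32, 64] via the Shuffle layer.
shuffle = network.add_shuffle(x)
shuffle.reshape_dims = (32, 64)
network.mark_output(shuffle.get_output(0))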

Another possible cause is the UFF parser, so I wrote a simple example to reproduce this issue:

import tensorflow as tf
import tensorrt as trt
import uff

sess = tf.Session()
inputs = tf.placeholder(tf.float32, [16, 3, 64, 64], name="inputs")
output = tf.reshape(inputs, [16,3*64*64])
output = tf.reshape(output, [16*3*64*64])
output = tf.nn.relu(inputs,name="output")
uff_model = uff.from_tensorflow(sess.graph_def, ['output'], output_filename="tmp.uff", text=True)
TRT_LOGGER = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.UffParser()

parser.register_input('inputs', (3,64,64))
parser.register_output('output')
parser.parse("tmp.uff", network)

builder.max_workspace_size = 1<<20
engine = builder.build_cuda_engine(network)

However, the sample works fine in both the parser and the engine.
Did I miss anything, or is the error only triggered for certain input dimensions?
Would you mind doing some further investigation to find the broken layer/condition?

Thanks.

Hi, @AastaLLL, Thanks for doing this.

After reviewing my model architecture, it seems that TensorFlow creates a rather complex fully connected layer from my code.

I rewrote this part to make it more straightforward, converted the model to a UFF file, and ran the TensorRT parser again. The reshape issue is gone, but now I hit another issue.

Here is the parse log of the new issue:

INFO: UFFParser: parsing attention_1/matmul_1
INFO: UFFParser: parsing lpr_net/temporal_block_0/attention_block_1/Reshape_1/shape
INFO: UFFParser: parsing lpr_net/temporal_block_0/attention_block_1/Reshape_1
INFO: UFFParser: parsing lpr_net/temporal_block_0/conv2d/kernel
INFO: UFFParser: parsing lpr_net/temporal_block_0/conv2d/Conv2D
INFO: UFFParser: parsing lpr_net/temporal_block_0/conv2d/bias
INFO: UFFParser: parsing lpr_net/temporal_block_0/conv2d/BiasAdd
INFO: UFFParser: parsing lpr_net/temporal_block_0/add
INFO: UFFParser: parsing lpr_net/temporal_block_0/Relu
INFO: UFFParser: parsing lpr_net/temporal_block_1/batch_norm_conv2d_2/Pad/paddings
INFO: UFFParser: parsing lpr_net/temporal_block_1/batch_norm_conv2d_2/Pad
ERROR: lpr_net/temporal_block_0/add: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [1,32,64] and [64,1,32])

The shapes of both inputs of temporal_block_0/add are the same when I check TensorBoard:

[data](Nx1x32x64) ---> [ops... reshape to (Nx1x32x64)] --->(Nx1x32x64) --> [temporal_block_0/add] --->
    |                                                                                      ^
    |                                                                                      | (Nx1x32x64)
     ---------------------------------------------------------------------------------------

But according to the parse log, it looks like when the .pb is converted to the UFF file, the shape on the upper path stays NHWC (Nx1x32x64) instead of being converted to NCHW (Nx64x1x32), while the shape on the bottom path is converted to NCHW (Nx64x1x32).

Do you have any idea about this?

Thanks.

Hi @AastaLLL

As I mentioned above, the dimensions of the inputs of the Add operation are different (one is Nx1x32x64, the other is Nx64x1x32) after the TF model is converted to the UFF file.

I am trying to write a custom Add plugin that reshapes one input to the correct shape and then adds it to the other input, to replace the original Add operation.

Below is the pseudocode of my custom Add plugin:

int Plugin::enqueue(int batchSize, const void* const* inputs, void** outputs, void* workspace, cudaStream_t stream)
{
    // Is this the correct way to get the first and second input of the Add operation?
    const void* first_input = inputs[0];
    const void* second_input = inputs[1];

    // Reshape first_input to second_shape, then add it to second_input (pseudocode).
    reshape_one_then_add_another(first_input, first_shape, second_input, second_shape);
    return 0;
}

My questions are: how do I get the corresponding shape of each input of the original Add operation, and how do I get the first and second inputs of the Add operation separately?

Thanks

Hi,

It looks like one of the tensors is updated from NCHW to NHWC.

Could you help profile the output of the reshape?
I am not sure if there is any hidden issue within the reshape layer:

INFO: UFFParser: parsing lpr_net/temporal_block_0/attention_block_1/Reshape_1/shape
INFO: UFFParser: parsing lpr_net/temporal_block_0/attention_block_1/Reshape_1
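One way to check (a sketch; the .uff filename below is a placeholder) is to register the reshape node as an extra output when parsing and print the shape TensorRT infers for it:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.UffParser()

parser.register_input('InputImage', (1, 64, 128))
# Mark the suspect reshape as an extra output so its tensor appears in the network.
parser.register_output('lpr_net/temporal_block_0/attention_block_1/Reshape_1')
parser.parse("model.uff", network)

# Print the dimensions TensorRT assigned to every marked output.
for i in range(network.num_outputs):
    t = network.get_output(i)
    print(t.name, t.shape)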

Thanks.

Hi AastaLLL, thanks for your response.

I have tried many solutions; in the end, I modified my model to use the NCHW data format and retrained it.

But even after changing to NCHW, there is still a shape issue.

For example, if my layer (TensorFlow) is like:

x = [N, 32, 64]
d1 = tf.layers.dense(x, units=64)  # kernel = [64, 64]
d1 = [N, 32, 64]

The code above works in TensorFlow v1.10: the kernel [64, 64] is extended to [N, 64, 64] to do a BatchMatMul with x.
But it causes Reshape: Volume mismatch when building the TensorRT model.

If the layer is as below, there is no volume mismatch issue:

x = [N, 2048]
d1 = tf.layers.dense(x, units=64)  # kernel = [2048, 64]
d1 = [N, 64]
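For completeness, this is roughly how I build and convert the two cases end to end (a sketch; the fixed batch size, node names, and file name are placeholders, not my real model):

import tensorflow as tf
import uff

# Case 1: dense applied to a 3-D tensor -- the case that fails for me.
x3 = tf.placeholder(tf.float32, [1, 32, 64], name="x3")
d3 = tf.layers.dense(x3, units=64, name="dense_3d")    # output [1, 32, 64]
out3 = tf.identity(d3, name="out3")

# Case 2: dense applied to a 2-D tensor -- this case parses fine.
x2 = tf.placeholder(tf.float32, [1, 2048], name="x2")
d2 = tf.layers.dense(x2, units=64, name="dense_2d")    # output [1, 64]
out2 = tf.identity(d2, name="out2")

sess = tf.Session()
sess.run(tf.global_variables_initializer())
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, ["out3", "out2"])
uff.from_tensorflow(frozen, ["out3", "out2"], output_filename="dense_test.uff")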

It looks like TensorRT does not support a dot product between tensors with different ranks?

Thanks.

Hi,

Thanks for your update.

TensorRT may not be able to cover every use case, since TensorFlow operations are very flexible.
We will try your use case and share more information with you later.

Thanks.

Hi,

You can check our support matrix here:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html#layers-matrix

MatrixMultiply requires its inputs/outputs to have 2 or more dimensions.
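For reference, a per-sample [32, 64] x [64, 128] product can be expressed directly through the network API roughly like this (a sketch, assuming a recent TensorRT Python API with an implicit batch dimension; names and values are placeholders):

import numpy as np
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()

# Per-sample shape [32, 64]; the batch dimension N is implicit here.
x = network.add_input("x", trt.float32, (32, 64))
kernel = network.add_constant((64, 128), trt.Weights(np.ones((64, 128), dtype=np.float32)))
mm = network.add_matrix_multiply(x, trt.MatrixOperation.NONE,
                                 kernel.get_output(0), trt.MatrixOperation.NONE)
network.mark_output(mm.get_output(0))   # per-sample output shape [32, 128]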
Thanks

So does TensorRT support [N, 32, 64] dot [64, 128] = [N, 32, 128]? (N is the batch size)

From my tests with TensorRT:

[N, 32, 64] -> dense_layer(kernel=[64,  128]) => [N, 32, 64] dot [64, 128]  = [N, 32, 128] // Reshape: Volume mismatch 

[N, 256]    -> dense_layer(kernel=[256, 128]) => [N, 256]    dot [256, 128] = [N, 128]     // success

Thanks.

Hi,

Not sure if this is related to the TensorFlow operation you chose.
Would you mind sharing an example with us so we can reproduce this issue?

Thanks.

Sure, I will put a small project on GitHub for your reference.

Thanks for paying attention to these problems.

Hi,

Is the sample ready now?
Thanks.