ERROR: UFFParser: Parser error: rpn_model/reshape_1/Reshape: Reshape: -1 dimension specified more than 1 time

Hi, recently I have been running Mask R-CNN with TensorRT 4. I can convert the Mask R-CNN Keras .h5 file to a UFF file, but when I run

auto parsed = parser->parse(...)

I get the following error:

ERROR: UFFParser: Parser error: rpn_model/reshape_1/Reshape: Reshape: -1 dimension specified more than 1 time

Can anybody help me?

GPU: 1080 Ti
CUDA: 9.0
cuDNN: 7.1
TensorRT: 4
UFF: 0.4.0
tensorflow-gpu: 1.12.0
Keras: 2.1.3
OS: Ubuntu 16.04
@SunilJB

Hi,

It seems the reshape operation does not satisfy the condition below:
“-1 specifies that the dimension should be automatically deduced - this can only be used at most once in any given shape.”
Please refer to the link below for more details:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/Operators.html#reshape
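
As a minimal illustration of the rule (using NumPy, which follows the same -1 semantics; this is not taken from your model):

import numpy as np

x = np.zeros((4, 6, 8))

# Valid: at most one -1, the missing dimension is deduced (24 here).
y = x.reshape((-1, 8))
print(y.shape)              # (24, 8)

# Invalid: two dimensions cannot both be deduced; this is the same rule
# the UFF parser enforces for Reshape nodes.
# x.reshape((-1, -1, 8))    # NumPy raises ValueError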

Also, I would recommend using the latest TRT 7 release.

Thanks

Hi SunilJB,
Thanks for your reply. Unfortunately, I can only use TensorRT 4 for now, but at least I now know the real cause.

- -1 specifies that the dimension should be automatically deduced - this can only be used at most once in any given shape.

Does that mean that, with different inputs, the same tensor may only have -1 specified once?
For example:

import keras.layers as KL
import keras.models as KM

def rpn_graph(feature_map, anchors_per_location, anchor_stride):
    shared = KL.Conv2D(512, (3, 3), padding='same', activation='relu',
                       strides=anchor_stride, name='rpn_conv_shared')(feature_map)

    x = KL.Conv2D(2 * anchors_per_location, (1, 1), padding='valid',
                  activation='linear', name='rpn_class_raw')(shared)
    x = KL.Permute((2, 3, 1))(x)

    rpn_class_logits = KL.Reshape((-1, 2))(x)    # ###### replace -1 with a fixed size

    rpn_probs = KL.Activation(
        "softmax", name="rpn_class_xxx")(rpn_class_logits)

    x = KL.Conv2D(anchors_per_location * 4, (1, 1), padding="valid",
                  activation='linear', name='rpn_bbox_pred')(shared)
    x = KL.Permute((2, 3, 1))(x)

    rpn_bbox = KL.Reshape((-1, 4))(x)            # ###### replace -1 with a fixed size

    return [rpn_class_logits, rpn_probs, rpn_bbox]

def build_rpn_model(anchor_stride, anchors_per_location, depth):
    input_feature_map = KL.Input(shape=[depth, None, None],   # ###### remove None (use a fixed size)
                                 name="input_rpn_feature_map")
    outputs = rpn_graph(input_feature_map, anchors_per_location, anchor_stride)
    return KM.Model([input_feature_map], outputs, name="rpn_model")

class XXXX():
    # Excerpt from the model build; P2..P6 are the FPN feature maps.
    rpn_feature_maps = [P2, P3, P4, P5, P6]
    rpn = build_rpn_model(config.RPN_ANCHOR_STRIDE,
                          len(config.RPN_ANCHOR_RATIOS), config.TOP_DOWN_PYRAMID_SIZE)
    layer_outputs = []  # list of lists
    for p in rpn_feature_maps:
        layer_outputs.append(rpn([p]))

With the different-sized input_feature_map tensors (P2~P6) fed through the for loop above, can the two Reshape calls in rpn_graph only use -1 once? So do you think it is a good idea to build five RPN models with different names (rpn_model1, rpn_model2, ...) and give each of them its own input_feature_map, with a fixed number in each Reshape, along the lines of the sketch below? Any other suggestions would be appreciated.
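
For concreteness, here is roughly what I have in mind, using KL/KM as in the snippet above. The level sizes, suffixes, and helper names are only placeholders, and it assumes anchor_stride == 1 so the spatial size of the shared conv equals the feature-map size:

def rpn_graph_fixed(feature_map, anchors_per_location, anchor_stride,
                    height, width, suffix):
    # Same structure as rpn_graph above, but every Reshape target is spelled
    # out so the UFF parser never has to deduce more than one dimension.
    # height/width are the spatial size of `shared` (equal to the feature-map
    # size only when anchor_stride == 1).
    shared = KL.Conv2D(512, (3, 3), padding='same', activation='relu',
                       strides=anchor_stride,
                       name='rpn_conv_shared' + suffix)(feature_map)

    x = KL.Conv2D(2 * anchors_per_location, (1, 1), padding='valid',
                  activation='linear', name='rpn_class_raw' + suffix)(shared)
    x = KL.Permute((2, 3, 1))(x)
    rpn_class_logits = KL.Reshape((height * width * anchors_per_location, 2))(x)
    rpn_probs = KL.Activation(
        "softmax", name="rpn_class_xxx" + suffix)(rpn_class_logits)

    x = KL.Conv2D(anchors_per_location * 4, (1, 1), padding="valid",
                  activation='linear', name='rpn_bbox_pred' + suffix)(shared)
    x = KL.Permute((2, 3, 1))(x)
    rpn_bbox = KL.Reshape((height * width * anchors_per_location, 4))(x)

    return [rpn_class_logits, rpn_probs, rpn_bbox]

def build_rpn_models(anchor_stride, anchors_per_location, depth, level_sizes):
    # level_sizes: one (suffix, height, width) entry per pyramid level,
    # e.g. [('_p2', 256, 256), ('_p3', 128, 128), ...] -- placeholder values.
    models = []
    for suffix, h, w in level_sizes:
        inp = KL.Input(shape=[depth, h, w],
                       name="input_rpn_feature_map" + suffix)
        outputs = rpn_graph_fixed(inp, anchors_per_location, anchor_stride,
                                  h, w, suffix)
        models.append(KM.Model([inp], outputs, name="rpn_model" + suffix))
    return models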
Thanks a lot.

OK, thanks @SunilJB. I have solved that problem, but another error occurred.

ERROR: UFFParser: Parser error: rpn_model_2/rpn_class_xxx/Exp: Unary not supported for other non-constant node

Any suggestions?

Hi,

I think it might be due to the constraint below:
“The output of a unary layer with a Constant input is treated as a Constant, and therefore will not work with layers expecting a Tensor input.”

https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/uff/Operators.html#unary
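
If you want to confirm which node introduces the Exp, you can list the operations in your frozen graph under that scope before converting to UFF. A rough TF 1.x diagnostic sketch; the .pb filename below is only a placeholder:

import tensorflow as tf

# Placeholder path to the frozen graph that was converted to UFF.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_mask_rcnn.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# A Keras softmax typically shows up as Exp / Sum / RealDiv nodes in the
# exported graph, which is what the UFF parser is complaining about.
for op in graph.get_operations():
    if "rpn_class_xxx" in op.name:
        print(op.name, op.type)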

Thanks

Hi SunilJB,
Thanks for your reply. I have fixed it now, though maybe not perfectly: the Keras softmax layer leads to the error, so I replaced softmax with sigmoid and the error disappeared.
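
For reference, the change boils down to the following (the wrapper function is only for illustration; only the activation string changes):

import keras.layers as KL

def rpn_class_probs(rpn_class_logits):
    # Before: softmax over the two class scores; its Exp node is what the
    # TensorRT 4 UFF parser rejects.
    # return KL.Activation("softmax", name="rpn_class_xxx")(rpn_class_logits)

    # After: sigmoid avoids the Exp node. It is not mathematically identical
    # to softmax, as noted above, but it converts cleanly.
    return KL.Activation("sigmoid", name="rpn_class_xxx")(rpn_class_logits)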
Then another error occurred, but I think it's the last one:

ERROR: UFFParser: Parser error: mrcnn_mask_deconv/conv2d_transpose: Output shape of UFF ConvTranspose is wrong

It comes from the Keras Conv2DTranspose layer in the mask head.
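
Roughly, the layer looks like this (the filter count is only an example and the wrapper is just for illustration):

import keras.layers as KL

def mask_deconv(x):
    # This is the node the UFF parser reports as
    # "mrcnn_mask_deconv/conv2d_transpose".
    return KL.Conv2DTranspose(256, (2, 2), strides=2, activation="relu",
                              name="mrcnn_mask_deconv")(x)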

I know it's a known issue in TensorRT 4, but I still want to find a way to work around it in TensorRT 4. Any suggestions would be appreciated.

Hi,

Other than upgrading the TRT version, I am not aware of any other way to fix this issue.

Thanks

Hi SunilJB,
It's weird: when someone uses slim.conv2d_transpose in the network everything is fine, but when I use tf.conv2d_transpose the bug occurs. Thanks anyway.