Unable to convert Mask-RCNN (.pb or .h5 format) model to UFF

I am using the following setup:


  • i7-based Acer system (host)
  • NVIDIA Jetson Xavier (target)
  • Ubuntu 18.04
  • TensorRT v7.2.2
  • CUDA 10.2.89
  • cuDNN v8.1.1
  • GNU make >= v4.1
  • cmake >= v3.13
  • Python 3.6.5
  • UFF 0.6.9
  • graphsurgeon

Attempt 1

I followed the steps given in the sampleUffMaskRCNN README: I modified the conv2d_transpose conversion function in /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter_functions.py and applied the 0001-Update-the-Mask_RCNN-model-from-NHWC-to-NCHW.patch patch.

For testing, I used the same sample model provided in the instructions; in short, I replicated all the steps.

The config.py and model.py I used are attached: config.py (9.2 KB), model.py (124.6 KB)

The conversion fails with a traceback ending in:

Converting to UFF graph
Traceback (most recent call last):
  File "mrcnn_to_trt_single.py", line 164, in <module>
  File "mrcnn_to_trt_single.py", line 123, in main
    text=True, list_nodes=list_nodes)
  File "mrcnn_to_trt_single.py", line 157, in convert_model
    debug_mode = False
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 276, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 225, in from_tensorflow
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/uff/converters/tensorflow/converter.py", line 141, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/uff/converters/tensorflow/converter.py", line 126, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/uff/converters/tensorflow/converter.py", line 94, in convert_layer
    return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/uff/converters/tensorflow/converter_functions.py", line 87, in convert_add
    uff_graph.binary(inputs[0], inputs[1], 'add', name)
IndexError: list index out of range

The entire logs are here: log1.txt (92.7 KB)
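For context on this `IndexError`: `convert_add` assumes every Add node still has two inputs when it reaches the converter, so the failure usually means the preprocessor's graph surgery (in config.py) removed or renamed an upstream node without rewiring its consumers. A minimal sketch reproducing the failure mode (the node names below are hypothetical, and the function only mirrors the one line from converter_functions.py):

```python
# Sketch of the line that fails in converter_functions.py (line 87):
#     uff_graph.binary(inputs[0], inputs[1], 'add', name)
# It indexes inputs[1] unconditionally, so an Add node left with a
# single input after graph surgery raises IndexError.
def convert_add(inputs, name):
    return (inputs[0], inputs[1], "add", name)

# A well-formed Add node converts fine:
result = convert_add(["conv1/out", "bias1"], "add_1")
print(result)  # ('conv1/out', 'bias1', 'add', 'add_1')

# An Add node whose second input was stripped reproduces the log:
try:
    convert_add(["conv1/out"], "add_2")
except IndexError as e:
    print("IndexError:", e)
```

If this is the cause, the fix is to check that the node names listed in the preprocessor's graph-surgery section exactly match the names in your frozen graph (they differ between Mask R-CNN repo versions).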

Running the same script with my own model (also ResNet101-based, like the sample model), I get an error ending with:

Traceback (most recent call last):
  File "mrcnn_to_trt_single.py", line 164, in <module>
  File "mrcnn_to_trt_single.py", line 115, in main
    model.load_weights(model_weights_path, by_name=True)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/keras/engine/topology.py", line 2643, in load_weights
    f, self.layers, skip_mismatch=skip_mismatch)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/keras/engine/topology.py", line 3248, in load_weights_from_hdf5_group_by_name
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 2365, in batch_set_value
    assign_op = x.assign(assign_placeholder)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/ops/variables.py", line 2067, in assign
    self._variable, value, use_locking=use_locking, name=name)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/ops/state_ops.py", line 227, in assign
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/ops/gen_state_ops.py", line 66, in assign
    use_locking=use_locking, name=name)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/framework/op_def_library.py", line 794, in _apply_op_helper
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3357, in create_op
    attrs, op_def, compute_device)
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 3426, in _create_op_internal
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1770, in __init__
  File "/home/hotify/trt_sample_try/trt_sample/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1610, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimension 1 in both shapes must be equal, but are 324 and 16. Shapes are [1024,324] and [1024,16]. for 'Assign_682' (op: 'Assign') with input shapes: [1024,324], [1024,16].

The entire logs are here: log3.txt (10.1 KB).
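The mismatched dimension in `Assign_682` looks like a class-count mismatch rather than a backbone problem: in the standard Matterport Mask R-CNN head, the `mrcnn_bbox_fc` kernel has shape `[1024, 4 * NUM_CLASSES]`, so 324 corresponds to 81 classes (COCO's 80 + background) on one side and 16 corresponds to 4 classes on the other. A small sketch of the arithmetic (assuming that head layout):

```python
# In Mask R-CNN, mrcnn_bbox_fc maps the 1024-d shared RoI feature to
# 4 box deltas per class, so its kernel shape is [1024, 4 * NUM_CLASSES].
def bbox_fc_shape(num_classes, fc_dim=1024):
    return [fc_dim, 4 * num_classes]

print(bbox_fc_shape(81))  # [1024, 324] -- COCO: 80 classes + background
print(bbox_fc_shape(4))   # [1024, 16]  -- 3 classes + background
```

If this is the cause, the fix is to make `NUM_CLASSES` in the conversion config match the class count the .h5 weights were trained with, or to exclude the class-dependent head layers when loading the weights.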

Both models are ResNet101-based, yet both lead to errors. I tried following the post "converting mask rcnn to tensor rt - #31 by ChrisDing", but their configuration file is very different from mine; using theirs, I get the same error.

Any help is appreciated 🙂

Hi @pradan,
Will check and get back to you.


OK, I got past this error by using CUDA 10.2 + cuDNN 8.1 and TensorRT

Can you please guide me on how to visualise the results of the Mask R-CNN UFF predictions? I mean, is there a way to save the mask?

You can refer to GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps: deepstream 4.x samples to deploy TLT training models to save the mask, but we don’t have an existing OSD plugin that can draw the Mask R-CNN mask on the frame.

The DeepStream repo recommends TensorRT version 5 or 6, but I converted the .h5 to UFF using version 7.0. How is that justified? Also, I feel the documentation on DeepStream is poor.

What methods do I have that do not need DeepStream?
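Without DeepStream, one option is to post-process the raw mask-head output yourself: Mask R-CNN emits a low-resolution probability map per detection (28×28 in the standard configuration), which you threshold and paste into the detection box. A minimal numpy sketch under those assumptions (the function name and box format here are illustrative, not from any NVIDIA sample):

```python
import numpy as np

def paste_mask(mask_lr, box, image_shape, threshold=0.5):
    """Resize a low-res per-detection mask into a full-frame binary mask.

    mask_lr:     (h, w) float array of mask probabilities (e.g. 28x28)
    box:         (y1, x1, y2, x2) pixel coordinates of the detection
    image_shape: (height, width) of the full frame
    """
    y1, x1, y2, x2 = box
    bh, bw = y2 - y1, x2 - x1
    # Nearest-neighbour resize to the box size (avoids a cv2 dependency).
    ys = (np.arange(bh) * mask_lr.shape[0] // bh).clip(0, mask_lr.shape[0] - 1)
    xs = (np.arange(bw) * mask_lr.shape[1] // bw).clip(0, mask_lr.shape[1] - 1)
    resized = mask_lr[np.ix_(ys, xs)]
    full = np.zeros(image_shape, dtype=np.uint8)
    full[y1:y2, x1:x2] = (resized >= threshold).astype(np.uint8)
    return full

# Fake 28x28 mask-head output with a central blob:
mask = np.zeros((28, 28), dtype=np.float32)
mask[7:21, 7:21] = 0.9
full = paste_mask(mask, (10, 20, 66, 76), (120, 160))
```

The resulting `full` array can then be saved or overlaid on the frame, e.g. `Image.fromarray(full * 255).save("mask.png")` with Pillow.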


We have Mask R-CNN samples.


You can refer to the PeopleSegNet sample in GitHub - NVIDIA-AI-IOT/deepstream_tlt_apps at release/tlt3.0.


Still no answer to this…

Does it make sense to use TensorRT 5 or 6 now, after converting with TensorRT 7?

Each TRT release ships its own UFF converter, so there is no guarantee that a UFF file converted with one TRT version works on a different one.
Also, UFF is being deprecated; we recommend using ONNX instead.

> We have deprecated the Caffe Parser and UFF Parser in TensorRT 7.0. They are still tested and functional in TensorRT 8.0, however, we plan to remove the support in the future. Ensure you migrate your workflow to use tf2onnx, keras2onnx or TensorFlow-TensorRT (TF-TRT) for deployment. (Release Notes :: NVIDIA Deep Learning TensorRT Documentation)
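For the frozen-graph workflow in this thread, the usual migration path is tf2onnx. A hedged command-line sketch (the file and tensor names below are placeholders; your actual input/output tensor names must be read from the frozen graph, e.g. with Netron):

```shell
pip install tf2onnx

# Convert the frozen TF graph to ONNX; replace the tensor names with the
# real ones from your graph.
python -m tf2onnx.convert \
    --graphdef mrcnn_frozen.pb \
    --output mrcnn.onnx \
    --inputs input_image:0 \
    --outputs mrcnn_detection:0 \
    --opset 11
```

The resulting .onnx can then be fed to TensorRT's ONNX parser, for example via `trtexec --onnx=mrcnn.onnx`.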