Converting tf.keras MobileNetV2 to UFF fails

TensorFlow version: 1.13.1
TensorRT Installation: TensorRT 5.1.5.0 GA for Ubuntu 16.04 and CUDA 10.0 tar package

Error:

Converting mobilenetv2_1.00_224/Conv_1_bn/cond/ReadVariableOp_1/Switch as custom op: Switch
Traceback (most recent call last):
  <omitted>
    uff_model = uff.from_tensorflow_frozen_model(pb_filepath)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 233, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 181, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 41, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 222, in parse_tf_attrs
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 222, in <dictcomp>
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 218, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 190, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 103, in convert_tf2numpy_dtype
    return tf.as_dtype(dtype).as_numpy_dtype
  File "/home/ikarakozis/.local/lib/python2.7/site-packages/tensorflow/python/framework/dtypes.py", line 129, in as_numpy_dtype
    return _TF_TO_NP[self._type_enum]
KeyError: 20

Code:

import tensorflow as tf
import uff

tf.enable_v2_behavior()

def main():
    with tf.Session() as sess:
        # MobileNetV2 backbone plus a small two-class sigmoid head
        base_model = tf.keras.applications.MobileNetV2(
            input_shape=[224, 224, 3],
            include_top=False,
            weights='imagenet')

        global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
        prediction_layer = tf.keras.layers.Dense(2, activation='sigmoid')
        model = tf.keras.Sequential(
            [base_model, global_average_layer, prediction_layer])

        # get inference graph (the head's output node is 'dense/Sigmoid')
        graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ['dense/Sigmoid'])
        graph_def = tf.graph_util.remove_training_nodes(graph_def)

        # write graph to .pb file
        pb_filepath = 'saved_model.pb'
        with open(pb_filepath, 'wb') as f:
            f.write(graph_def.SerializeToString())

        uff_model = uff.from_tensorflow_frozen_model(pb_filepath)

if __name__ == '__main__':
    main()

I encountered the same error. After digging around in the code a bit, here is what I found:
In

tensorflow/python/framework/dtypes.py

there is a _TF_TO_NP dictionary.

This dictionary is built from the DataType enum in

tensorflow/core/framework/types_pb2.py

where enum value 20 is declared as DT_RESOURCE.

In the same

tensorflow/python/framework/dtypes.py

file there is also the set:

_NUMPY_INCOMPATIBLE = frozenset(
    [
        types_pb2.DT_VARIANT,
        types_pb2.DT_VARIANT_REF,
        types_pb2.DT_RESOURCE,
        types_pb2.DT_RESOURCE_REF,
    ]
)

According to this, DT_RESOURCE is intentionally excluded from the _TF_TO_NP mapping, which is exactly why the lookup for enum value 20 raises KeyError.
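
A quick sanity check confirms the mapping (just a small sketch against the same TensorFlow 1.13 install, nothing UFF-specific):

import tensorflow as tf
from tensorflow.core.framework import types_pb2

# Enum value 20 from the KeyError resolves to DT_RESOURCE.
print(types_pb2.DataType.Name(20))       # DT_RESOURCE
print(types_pb2.DT_RESOURCE)             # 20

# tf.as_dtype accepts the enum, but the resulting DType has no numpy
# equivalent, which is exactly where the converter blows up.
dt = tf.as_dtype(types_pb2.DT_RESOURCE)
print(dt)                                # <dtype: 'resource'>
try:
    print(dt.as_numpy_dtype)
except KeyError as e:
    print('KeyError: %s' % e)            # KeyError: 20, same as in the traceback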

Here is a dump of _TF_TO_NP at runtime:

{
    1: <class 'numpy.float32'>,
    2: <class 'numpy.float64'>,
    3: <class 'numpy.int32'>,
    4: <class 'numpy.uint8'>,
    5: <class 'numpy.int16'>,
    6: <class 'numpy.int8'>,
    7: <class 'object'>,
    8: <class 'numpy.complex64'>,
    9: <class 'numpy.int64'>,
    10: <class 'bool'>,
    11: dtype([('qint8', 'i1')]),
    12: dtype([('quint8', 'u1')]),
    13: dtype([('qint32', '<i4')]),
    14: <class 'bfloat16'>,
    15: dtype([('qint16', '<i2')]),
    16: dtype([('quint16', '<u2')]),
    17: <class 'numpy.uint16'>,
    18: <class 'numpy.complex128'>,
    19: <class 'numpy.float16'>,
    22: <class 'numpy.uint32'>,
    23: <class 'numpy.uint64'>,
    ...
}

I tried adding DT_RESOURCE to _TF_TO_NP as np.object:

...
types_pb2.DT_BFLOAT16: _np_bfloat16,
# try
types_pb2.DT_RESOURCE: np.object,
# Ref types
types_pb2.DT_HALF_REF: np.float16,
...

But this didn’t help.
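
In case it helps anyone else dig further, here is a small inspection sketch (assuming the frozen graph was written to saved_model.pb as in the code above) that lists the nodes carrying a DT_RESOURCE-typed attribute; in my graph these are the batch-norm Switch nodes such as the Conv_1_bn/cond/ReadVariableOp_1/Switch named at the top of the traceback:

import tensorflow as tf
from tensorflow.core.framework import types_pb2

graph_def = tf.GraphDef()
with open('saved_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Print every node whose 'type' attribute is DT_RESOURCE -- these are the
# nodes the UFF converter trips over, with or without the dtypes.py patch.
for node in graph_def.node:
    for key, attr in node.attr.items():
        if attr.WhichOneof('value') == 'type' and attr.type == types_pb2.DT_RESOURCE:
            print('%s %s %s' % (node.op, node.name, key))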

I still haven't found a solution yet, but I'm trying.

Hi,

Sorry for the late reply.
Have you fixed this issue?

We have a tutorial for converting SSD-MobileNetV2 into TensorRT.
You can check it for some information:
https://github.com/AastaNV/TRT_object_detection

Thanks.

Hi.
I'm trying to convert faster_rcnn_resnet50/101 to TensorRT, so the tutorial is not very helpful for my case.
But thanks anyway.