How to convert a TensorRT graph to a TensorRT engine

I need to convert my TensorFlow model to a TensorRT engine. However, since the model contains some layers TensorRT does not support, I decided to use create_inference_graph to optimize it. Although this works fine in Python, I need to write the result to a *.engine file so that I can read it in C++. I tried to convert it to a UFF model using uff.from_tensorflow_frozen_model, but got a KeyError. Is there a way to convert the optimized graph to an engine? Any suggestion would be appreciated.

Hello, which version of TRT are you using? And what error do you see when you convert the model to UFF?

It’d help us debug if you can provide the source file you are using for the conversion.

OS : Ubuntu 16.04
Python : 3.5
TensorFlow : 1.11
TensorRT : 4.0.1.6

import tensorflow as tf
import tensorflow.contrib.tensorrt as trt
import uff

def load_graph(file):
    with tf.gfile.GFile(file, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
    return graph, graph_def

graph, graph_def = load_graph('tensorflow_model.pb')

# create_inference_graph returns a GraphDef with TensorRT-optimized segments
tensorrt_graph = trt.create_inference_graph(
    graph_def,
    outputs=['output_node'],
    max_batch_size=1,
    precision_mode='FP32',
    max_workspace_size_bytes=1 << 33)

with tf.gfile.GFile('tensorrt_model.pb', 'wb') as f:
    f.write(tensorrt_graph.SerializeToString())

loaded_tensorrt_graph, loaded_tensorrt_graph_def = load_graph('tensorrt_model.pb')
uff_model = uff.from_tensorflow_frozen_model(loaded_tensorrt_graph_def)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
    name="main")
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 76, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 63, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 38, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 209, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 209, in <dictcomp>
    for key, val in attrs.items()}
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 204, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py", line 189, in convert_tf2uff_field
    'type': 'dtype', 'list': 'list'}
KeyError: 'shape'
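For what it’s worth, the KeyError is raised while the converter walks per-node attributes, so listing the distinct op types in the frozen graph before converting can show which nodes the UFF converter won’t recognize. Below is a minimal sketch of that bookkeeping; the node list and the allowlist are illustrative stand-ins, not the real graph or the converter’s actual op table (in a real graph you’d read name/op from graph_def.node):

```python
# Stand-in for iterating graph_def.node: each entry is (node.name, node.op).
nodes = [
    ("input", "Placeholder"),
    ("conv1/Conv2D", "Conv2D"),
    ("conv1/Relu", "Relu"),
    ("my_trt_segment", "TRTEngineOp"),  # op type inserted by create_inference_graph
    ("output_node", "Identity"),
]

# Illustrative allowlist; the real UFF converter keeps its own op table.
SUPPORTED_OPS = {"Placeholder", "Conv2D", "Relu", "Identity", "BiasAdd", "MatMul"}

def unsupported_ops(nodes, supported):
    """Return the op types that the converter would not recognize."""
    return {op for _, op in nodes if op not in supported}

print(sorted(unsupported_ops(nodes, SUPPORTED_OPS)))  # ['TRTEngineOp']
```

Note that the TRTEngineOp segments inserted by create_inference_graph are TensorFlow-side ops, so a graph that already went through TF-TRT contains op types the standalone UFF converter does not know about; converting the original frozen graph instead sidesteps that class of failure.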

Hello,

It’d be useful if you can provide the full source (including the import statements) and the .pb files so we can accurately reproduce the issue you are seeing.

regards,
NVIDIA Enterprise Support

Hi, before uploading the files: is the TensorRT graph created by create_inference_graph supposed to be convertible to a TensorRT engine even if the model contains operations TensorRT does not support?

Hello,

Unsupported ops in your network won’t be converted successfully. For those layers, you’d need the TensorRT plugin API.

Assuming your network is implemented in TensorFlow, the workflow is to convert the model to UFF format and implement the unsupported layers via the Plugin API.
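Once the UFF model parses and the unsupported layers are handled by plugins, producing the *.engine file the question asks about is just a matter of dumping the serialized engine buffer to disk (engine.serialize() in the TensorRT Python API) and reading it back with IRuntime::deserializeCudaEngine on the C++ side. Here is a sketch of only the file-handling half, with a placeholder byte string standing in for the real serialized buffer:

```python
# Placeholder for the buffer returned by engine.serialize(); in real code
# this comes from a built ICudaEngine, not a literal.
serialized_engine = b"\x00TRT-plan-bytes"

# Write the plan file that the C++ runtime will deserialize later.
with open("model.engine", "wb") as f:
    f.write(serialized_engine)

# The C++ side would read these bytes back and pass them to
# IRuntime::deserializeCudaEngine(data, size, pluginFactory).
with open("model.engine", "rb") as f:
    restored = f.read()

print(restored == serialized_engine)  # True
```

The plan file is just raw bytes, so the only things that have to match between the two sides are the TensorRT version and the GPU architecture the engine was built for.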