Cannot convert a TensorFlow model to UFF

Hello,

I'm following the guide to convert a TensorFlow model to UFF:

res_graph = train_net(train_x, train_y, test_x, test_y, num_class, MAX_LEN, LEARNING_RATE)

# Convert a model to UFF
uff_model = uff.from_tensorflow(graphdef=res_graph,
                                    output_filename=UFF_OUTPUT_FILENAME,
                                    output_nodes=OUTPUT_NAMES,
                                    text=True)

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

parser = uffparser.create_uff_parser()
parser.register_input("input", (1, 1, 800), 0)  # [channels, height, width]
parser.register_output("layer_fc/tf_output/BiasAdd")

engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)

it fails with:

[TensorRT] ERROR: Failed to parse UFF model stream
  File "/usr/local/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 186, in uff_to_trt_engine
    assert(parser_result)
Traceback (most recent call last):
  File "/home/yizong/PycharmProjects/OfflineModelTensorflow-20180209/tensorflow_model.py", line 270, in <module>
    engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)
  File "/usr/local/lib/python2.7/dist-packages/tensorrt/utils/_utils.py", line 194, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 186 in statement assert(parser_result)

Then I modified my code as follows, saving the .uff file and reading it back:

# Read UFF model
uff_model = open(UFF_OUTPUT_FILENAME, 'rb').read()

# generate a TensorRT engine by creating a logger for TensorRT
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

# Create a UFF parser and identify the desired input and output nodes
parser = uffparser.create_uff_parser()
parser.register_input("input", (1, 1, 800), 0)  # [channels, height, width]
parser.register_output("layer_fc/tf_output/BiasAdd")

# Pass the logger, parser, the UFF model stream, and some settings (max batch size and
# max workspace size) to a utility function that will create the engine
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)

This time it exits with error 134, as shown below:

Process finished with exit code 134 (interrupted by signal 6: SIGABRT)

I don't know why… Thanks for any help.

Hi chenyiyibupt, which network are you trying to convert?

Are you able to successfully run any of the simple examples from the documentation through the converter?
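
For reference, a minimal self-contained sanity check could look like the sketch below. It uses the same trt.utils/uffparser API as your code; the toy graph and node names are illustrative, and whether parsing succeeds still depends on which ops the UFF converter supports.

# Sketch: build a tiny TF graph, freeze it, convert it to UFF, and parse it
# with TensorRT. Toy graph and node names are illustrative only.
import tensorflow as tf
import uff
import tensorrt as trt
from tensorrt.parsers import uffparser

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [1, 1, 1, 800], name='input')
    w = tf.constant(0.5, shape=[800, 10])
    y = tf.matmul(tf.reshape(x, [1, 800]), w, name='output')

with tf.Session(graph=g) as sess:
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, g.as_graph_def(), ['output'])

uff_model = uff.from_tensorflow(frozen, ['output'])

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
parser = uffparser.create_uff_parser()
parser.register_input('input', (1, 1, 800), 0)
parser.register_output('output')
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)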

Hi @chenyiyibupt, have you solved this error?
If you have resolved it, please share the solution.
Error

File "/home/wang/PycharmProjects/Test_VGG_Tensorrt/create_engine.py", line 44, in <module>
    create_and_save_inference_engine()
  File "/home/wang/PycharmProjects/Test_VGG_Tensorrt/create_engine.py", line 29, in create_and_save_inference_engine
    trt.infer.DataType.FLOAT)
  File "/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py", line 263, in uff_to_trt_engine
    raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
AssertionError: UFF parsing failed on line 255 in statement assert(parser.parse(stream, network, model_datatype))

Thanks

Hi,

Could you share the model and the conversion Python script with us?
Thanks.

Thanks @AastaLLL. I have followed the NVIDIA documentation (https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/workflows/tf_to_tensorrt.html) and my task is object detection.

code

import tensorflow as tf
import tensorrt as trt
from tensorrt.parsers import uffparser
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
from random import randint # generate a random test case
from PIL import Image
from matplotlib.pyplot import imshow # To show test case
import time
import os

import keras.backend.tensorflow_backend as K
import uff
uff_model = uff.from_tensorflow('/home/cloud4/Akhtar_Vir_Env/my_new_app/mars-small128.pb', ["images"])
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
parser = uffparser.create_uff_parser()
parser.register_input("images", (128, 64, 3), 0)
parser.register_output("features")
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)  # this line generates the error

File "/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py", line 255, in uff_to_trt_engine
    assert(parser.parse(stream, network, model_datatype))

AssertionError Traceback (most recent call last)
/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py in uff_to_trt_engine(logger, stream, parser, max_batch_size, max_workspace_size, datatype, plugin_factory, calibrator)
    254 try:
--> 255 assert(parser.parse(stream, network, model_datatype))
    256 except AssertionError:

AssertionError:

During handling of the above exception, another exception occurred:

AssertionError Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)

/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py in uff_to_trt_engine(logger, stream, parser, max_batch_size, max_workspace_size, datatype, plugin_factory, calibrator)
    261 filename, line, func, text = tb_info[-1]
    262
--> 263 raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
    264
    265

AssertionError: UFF parsing failed on line 255 in statement assert(parser.parse(stream, network, model_datatype))

You can see my frozen graph

and the file from which I created the frozen graph.

System information:
Linux: 16
CUDA: 9.0
Driver: 390.87
cuDNN: 7.1
TensorRT: 4
Python: 3.5
GPU: GTX 1080
TensorFlow: 1.7.1

Thanks

Maybe you could try optimizing your model with TF-TRT directly, skipping the UFF conversion step; see the sketch after the references below.

Reference:

  1. https://devtalk.nvidia.com/default/topic/1037019/jetson-tx2/tensorflow-object-detection-and-image-classification-accelerated-for-nvidia-jetson/

  2. https://devtalk.nvidia.com/default/topic/1042106/how-to-train-a-custom-object-detector-and-deploy-it-onto-jtx2-with-tf-trt-tensorrt-optimized-/#5285864
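
For reference, a minimal TF-TRT sketch against the TF 1.7-era contrib API could look like this; the frozen-graph path and output node names are placeholders, not taken from your model:

# Sketch: optimize a frozen TensorFlow graph with TF-TRT (contrib API from
# TF 1.7). The path and output names below are placeholder assumptions.
import tensorflow as tf
import tensorflow.contrib.tensorrt as tftrt

graph_def = tf.GraphDef()
with open('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Supported subgraphs are replaced by TensorRT ops; everything else keeps
# running as ordinary TensorFlow, so unsupported layers are not a blocker.
trt_graph = tftrt.create_inference_graph(
    input_graph_def=graph_def,
    outputs=['num_detections', 'detection_boxes',
             'detection_scores', 'detection_classes'],
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16')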

Thanks @jkjung13. I am following https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/workflows/tf_to_tensorrt.html and here is my code.

Code
import tensorflow as tf
import tensorrt as trt
from tensorrt.parsers import uffparser
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
from random import randint # generate a random test case
from PIL import Image
from matplotlib.pyplot import imshow # To show test case
import time
import os
#import keras as K
import keras.backend.tensorflow_backend as K
import uff

uff_model = uff.from_tensorflow('/home/cloud4/Akhtar_Vir_Env/my_new_app/mars-small128.pb', ["images"])
G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)

parser = uffparser.create_uff_parser()
parser.register_input("images", (128, 64, 3), 0)
parser.register_output("features")
# the three lines above return true

engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)
# this last line generates the error below

File "/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py", line 255, in uff_to_trt_engine
    assert(parser.parse(stream, network, model_datatype))

AssertionError Traceback (most recent call last)
/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py in uff_to_trt_engine(logger, stream, parser, max_batch_size, max_workspace_size, datatype, plugin_factory, calibrator)
    254 try:
--> 255 assert(parser.parse(stream, network, model_datatype))
    256 except AssertionError:

AssertionError:

During handling of the above exception, another exception occurred:

AssertionError Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser, 1, 1 << 20)

/usr/local/lib/python3.5/dist-packages/tensorrt/utils/_utils.py in uff_to_trt_engine(logger, stream, parser, max_batch_size, max_workspace_size, datatype, plugin_factory, calibrator)
    261 filename, line, func, text = tb_info[-1]
    262
--> 263 raise AssertionError('UFF parsing failed on line {} in statement {}'.format(line, text))
    264
    265

AssertionError: UFF parsing failed on line 255 in statement assert(parser.parse(stream, network, model_datatype))

Thanks

Hi,

Please use the output op name ('features'?) to convert the UFF file.

For example:

uff_model = uff.from_tensorflow('/home/cloud4/Akhtar_Vir_Env/my_new_app/mars-small128.pb', ["features"])
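
As a usage note, the names passed in that output list should match what you later pass to parser.register_output.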

Thanks.

Thanks @AastaLLL.
When I change my code like this, it generates the error below:

uff_model = uff.from_tensorflow('/home/cloud4/Akhtar_Vir_Env/my_new_app/mars-small128.pb', ["features"])

Error

Using output node features
Converting to UFF graph

UffException Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 uff_model = uff.from_tensorflow('/home/cloud4/Akhtar_Vir_Env/my_new_app/mars-small128.pb', ["features"])

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/conversion_helpers.py in from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
    121 output_nodes=output_nodes,
    122 input_replacements=input_replacements,
--> 123 name="main")
    124
    125 uff_metagraph_proto = uff_metagraph.to_uff()

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in convert_tf2uff_graph(cls, tf_graphdef, uff_metagraph, output_nodes, input_replacements, name)
    77 while len(nodes_to_convert):
    78 nodes_to_convert += cls.convert_tf2uff_node(nodes_to_convert.pop(), tf_nodes,
--> 79 uff_graph, input_replacements)
    80 for output in output_nodes:
    81 uff_graph.mark_output(output)

/usr/local/lib/python3.5/dist-packages/uff/converters/tensorflow/converter.py in convert_tf2uff_node(cls, name, tf_nodes, uff_graph, input_replacements)
    54 return
    55 if name not in tf_nodes:
--> 56 raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
    57 tf_node = tf_nodes[name]
    58 inputs = list(tf_node.input)

UffException: features was not found in the graph. Please use the -l option to list nodes in the graph.

Hi,

It looks like the node you required is NOT in the .pb file.

UffException: features was not found in the graph. Please use the -l option to list nodes in the graph.

Could you check whether each node in output_nodes is actually included in your model?
If we are not missing something, the error comes from the 'mars-small128.pb' model not having a layer named 'features'.

Thanks.
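
The "-l option" mentioned in the exception presumably refers to the convert-to-uff command-line tool. As a sketch, you can also list the node names directly in Python (the .pb path is the one from this thread):

# Sketch: print every node name/op in the frozen graph so you can find
# the real output op.
import tensorflow as tf

graph_def = tf.GraphDef()
with open('/home/cloud4/Akhtar_Vir_Env/my_new_app/mars-small128.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name, node.op)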

Hi,

I have the same problem. I’m trying to convert the ssd_mobilenet_v1_coco_2018_01_28.

The selected output nodes need to change, because the uff.from_tensorflow function removes Identity layers (and the output layers are of that type). The function also emits a lot of warnings because some layers have no implementation. If you use that uff_model anyway, trt.utils.uff_to_trt_engine fails with this error:

AssertionError: UFF parsing failed on line 255 in statement assert(parser.parse(stream, network, model_datatype))
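
As a sketch of the Identity issue described above: you can look up which op feeds each Identity output and register that op as the output instead. The path below assumes the standard ssd_mobilenet_v1_coco_2018_01_28 download layout; the names printed will be whatever the real graph contains.

# Sketch: find the ops feeding Identity nodes, since uff.from_tensorflow
# strips Identity layers. The .pb path is an assumption.
import tensorflow as tf

graph_def = tf.GraphDef()
with open('ssd_mobilenet_v1_coco_2018_01_28/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'Identity':
        # node.input[0] is the tensor that actually produces this output.
        print('%s is fed by %s' % (node.name, node.input[0]))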

Do you know if someone has been able to convert the ssd_mobilenet_v1 to a UFF model?

Hi,

Did you follow the steps shared in this tutorial?
https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification

You may need to update the output configuration, but the procedure should be similar.
Thanks.

Hi,

I followed the following repo, which is forked from that one:

I tested it and everything runs OK: the times improve on the Jetson TX2 compared to the original TensorFlow model, but on a Tesla K80 they get worse.

That code does not convert the model to a UFF model and then build the engine; it only optimizes the TensorFlow model with TensorRT and then keeps using the TensorFlow model. To make that possible, it has to change a few layers.

However, when I try to convert the original model to UFF, there are a lot of layers that are not implemented…