# Load your newly created Tensorflow frozen model and convert it to UFF
uff_model = uff.from_tensorflow_frozen_model("keras_vgg19_frozen_graph.pb", ["dense_2/Softmax"])
What is the second argument to this function: the input or the output node?
When I run this same code I get the following error:
File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 42, in convert_tf2uff_node
tf_node = tf_nodes[name]
KeyError: 'dense_2/Softmax'
I tried it with ["fc2/Relu"], but got the same result. Is there any documentation on which arguments are valid?
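For context, the second argument is the list of output node names. Internally the converter builds a dict mapping node names to graph nodes and indexes it with each requested name, which is why a name that does not exactly match a node in the graph surfaces as a KeyError. A minimal plain-Python illustration of that lookup (the node names here are hypothetical):

```python
# Hypothetical name -> node index, mimicking tf_nodes in the UFF converter
# (converter.py does: tf_node = tf_nodes[name], which raises KeyError for
# any requested output name that is not an exact node name in the graph).
tf_nodes = {
    "input_1": "Placeholder",
    "fc2/Relu": "Relu",
    "predictions/Softmax": "Softmax",
}

def find_node(name):
    """Return the node for an exact name, or None instead of a KeyError."""
    return tf_nodes.get(name)

assert find_node("predictions/Softmax") == "Softmax"
assert find_node("dense_2/Softmax") is None  # the name from the error above
```

The name must match character for character, including the scope prefix, so "dense_2/Softmax" fails if the graph actually names the node something like "predictions/Softmax".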
It seems like the output node names are not in the TensorFlow graph. It may help to use the TensorBoard visualization tool to visualize the TensorFlow graph and determine the output node name. For example, by running
import keras
import keras.backend as K
import tensorflow as tf
vgg = keras.applications.vgg19.VGG19()
sess = K.get_session()
tf.summary.FileWriter('tensorboard_logdir', sess.graph_def)
You may then visualize the graph by launching $ tensorboard --logdir=tensorboard_logdir. For me, the output node name was 'predictions/Softmax'. Using this name I was able to freeze the graph and convert to UFF as follows.
import keras
import keras.backend as K
import tensorflow as tf
import uff
output_names = ['predictions/Softmax']
frozen_graph_filename = 'keras_vgg19_frozen_graph.pb'
sess = K.get_session()
# freeze graph and remove training nodes
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_names)
graph_def = tf.graph_util.remove_training_nodes(graph_def)
# write frozen graph to file
with open(frozen_graph_filename, 'wb') as f:
    f.write(graph_def.SerializeToString())
# convert frozen graph to uff
uff_model = uff.from_tensorflow_frozen_model(frozen_graph_filename, output_names)
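One pitfall when choosing output_names: Keras reports tensor names such as 'predictions/Softmax:0' (e.g. via model.output.name), while convert_variables_to_constants and the UFF converter expect the node name without the ':0' output-index suffix. A small helper for stripping it (the tensor names below are illustrative):

```python
def tensor_to_node_name(tensor_name):
    """Strip the ':<output index>' suffix from a TensorFlow tensor name."""
    return tensor_name.split(':')[0]

# 'predictions/Softmax:0' is a tensor name; the node name drops the ':0'.
assert tensor_to_node_name('predictions/Softmax:0') == 'predictions/Softmax'
assert tensor_to_node_name('detection_boxes') == 'detection_boxes'
```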
I am getting an error (specified below) when I run freeze_graph.py:
======= Error ==========
WARNING:tensorflow:From /home/gautam/TensorFlow/tensorflow-master/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
Traceback (most recent call last):
File "/home/gautam/TensorFlow/tensorflow-master/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 382, in
run_main()
File "/home/gautam/TensorFlow/tensorflow-master/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 379, in run_main
app.run(main=my_main, argv=[sys.argv[0]] + unparsed)
File "/home/gautam/TensorFlow/tensorflow-master/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/platform/app.py", line 126, in run
_sys.exit(main(argv))
File "/home/gautam/TensorFlow/tensorflow-master/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 378, in
my_main = lambda unused_args: main(unused_args, flags)
File "/home/gautam/TensorFlow/tensorflow-master/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 272, in main
flags.saved_model_tags, checkpoint_version)
File "/home/gautam/TensorFlow/tensorflow-master/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 231, in freeze_graph
input_graph_def = _parse_input_graph_proto(input_graph, input_binary)
File "/home/gautam/TensorFlow/tensorflow-master/bazel-bin/tensorflow/python/tools/freeze_graph.runfiles/org_tensorflow/tensorflow/python/tools/freeze_graph.py", line 174, in _parse_input_graph_proto
text_format.Merge(f.read(), input_graph_def)
File "/home/gautam/.cache/bazel/_bazel_gautam/eb9b2e55e49b116d67448eab2a287112/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 533, in Merge
descriptor_pool=descriptor_pool)
File "/home/gautam/.cache/bazel/_bazel_gautam/eb9b2e55e49b116d67448eab2a287112/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 587, in MergeLines
return parser.MergeLines(lines, message)
File "/home/gautam/.cache/bazel/_bazel_gautam/eb9b2e55e49b116d67448eab2a287112/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 620, in MergeLines
self._ParseOrMerge(lines, message)
File "/home/gautam/.cache/bazel/_bazel_gautam/eb9b2e55e49b116d67448eab2a287112/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 635, in _ParseOrMerge
self._MergeField(tokenizer, message)
File "/home/gautam/.cache/bazel/_bazel_gautam/eb9b2e55e49b116d67448eab2a287112/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 679, in _MergeField
name = tokenizer.ConsumeIdentifierOrNumber()
File "/home/gautam/.cache/bazel/_bazel_gautam/eb9b2e55e49b116d67448eab2a287112/execroot/org_tensorflow/bazel-out/k8-opt/bin/tensorflow/python/tools/freeze_graph.runfiles/protobuf_archive/python/google/protobuf/text_format.py", line 1152, in ConsumeIdentifierOrNumber
raise self.ParseError('Expected identifier or number, got %s.' % result)
google.protobuf.text_format.ParseError: 2:1 : Expected identifier or number, got `.
Looks like this error comes from the TensorFlow parser when importing the given model, not from TensorRT.
It's recommended to raise your issue with the TensorFlow team to get more information.
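For what it's worth, a ParseError from text_format.Merge like this usually means a binary .pb was fed to the text-format parser (freeze_graph.py reads the input graph as text unless --input_binary=true is passed). A rough heuristic for checking which format a graph file is in, sketched over raw bytes (the sample inputs are illustrative):

```python
def looks_like_text_graphdef(data):
    """Rough check: text-format GraphDefs start with ASCII such as 'node {'."""
    head = data[:64]
    try:
        head.decode('ascii')
    except UnicodeDecodeError:
        return False
    return b'node' in head

# Text-format (.pbtxt) graphs begin with readable 'node { ... }' blocks,
# while binary protobufs typically start with non-ASCII tag/length bytes.
assert looks_like_text_graphdef(b'node {\n  name: "input"\n}')
assert not looks_like_text_graphdef(b'\n\x9c\x01\n\x0cimage_tensor\x12')
```

If the file turns out to be binary, rerunning freeze_graph.py with --input_binary=true (or loading it via GraphDef.ParseFromString instead of text_format.Merge) avoids this parser.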
I have the same issue. The summarize_graph tool reports the right output node ('ssd_losses/softmax/Softmax'), but the script at #2 returns an error saying AssertionError: ssd_losses/softmax/Softmax is not in graph
List of packages:
Linux: 16
CUDA: 9.0
TensorRT: 4
Python: 3.5
GPU: GTX 1080
TensorFlow: 1.10.1
Error:
Traceback (most recent call last):
File "bbb.py", line 11, in
graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_names)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/graph_util_impl.py", line 232, in convert_variables_to_constants
inference_graph = extract_sub_graph(input_graph_def, output_node_names)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/graph_util_impl.py", line 174, in extract_sub_graph
_assert_nodes_are_present(name_to_node, dest_nodes)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/graph_util_impl.py", line 133, in _assert_nodes_are_present
assert d in name_to_node, "%s is not in graph" % d
AssertionError: predictions/Softmax is not in graph
Or please suggest any code that converts a TensorFlow frozen graph (frozen_inference_graph.pb) to a TensorRT engine for an object detection task.
This error occurs when TensorFlow loads your model; it is not related to TensorRT.
Please make sure you have a layer named 'predictions/Softmax' in your model.
If you don’t know the layer name of your model, here is a script for your reference:
import tensorflow as tf

FILE = 'frozen_inference_graph.pb'

graph = tf.Graph()
with graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(FILE, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

    sess = tf.Session()
    op = sess.graph.get_operations()
    for m in op:
        print(m.values())
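Printing every operation gets unwieldy for large graphs. Likely output nodes are the ones whose results no other node consumes, and that check needs nothing beyond the name-to-inputs relation of the GraphDef. A plain-Python sketch of the filter, over a hypothetical toy graph:

```python
# Hypothetical toy graph: node name -> names of its input nodes
# (in a real GraphDef this is node.name and node.input).
graph = {
    "image_tensor": [],
    "conv1": ["image_tensor"],
    "predictions/Softmax": ["conv1"],
}

# A node is a likely output if nothing else lists it as an input.
consumed = {inp for inputs in graph.values() for inp in inputs}
likely_outputs = sorted(n for n in graph if n not in consumed)

assert likely_outputs == ["predictions/Softmax"]
```

The same idea applied to a real frozen graph (collecting every node.input and subtracting) narrows the candidates down to a handful of names worth trying as output_nodes.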
I’m encountering the same problem, but I’ve verified the node name DOES exist in the GraphDef.
def convert_from_frozen_graph(modelpath):
    tf_model = get_frozen_graph(modelpath)
    # pprint.pprint([n.name for n in tf_model.node])
    print(tf_model.node[-1])
    uff_model = uff.from_tensorflow_frozen_model(modelpath, ["output"])
    parser = uffparser.create_uff_parser()
    parser.register_input("input", (None, None, None, 3), 0)
    parser.register_output("output")
Console output:
name: "output"
op: "Identity"
input: "concat_1"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
Using output node output
Converting to UFF graph
Traceback (most recent call last):
File "convert_to_tensorrt.py", line 57, in <module>
convert_from_frozen_graph(FLAGS.input_file)
File "convert_to_tensorrt.py", line 44, in convert_from_frozen_graph
uff_model = uff.from_tensorflow_frozen_model(modelpath, ["output"])
File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 149, in from_tensorflow_frozen_model
return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
name="main")
File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 77, in convert_tf2uff_graph
uff_graph, input_replacements)
File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 54, in convert_tf2uff_node
raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: output was not found in the graph. Please use the -l option to list nodes in the graph.
In my case I think the input node is 'images' and the output node is 'features'.
These nodes are also present in my graph:
'num_detections', 'detection_boxes', 'detection_scores', 'detection_classes', 'detection_masks'
but I tried all of these node names and none of them work for me; they generate the same error:
AssertionError                            Traceback (most recent call last)
in ()
      9
     10 # freeze graph and remove training nodes
---> 11 graph_def = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, output_names)
     12 graph_def = tf.graph_util.remove_training_nodes(graph_def)
     13

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/graph_util_impl.py in convert_variables_to_constants(sess, input_graph_def, output_node_names, variable_names_whitelist, variable_names_blacklist)
    230   # This graph only includes the nodes needed to evaluate the output nodes, and
    231   # removes unneeded nodes like those involved in saving and assignment.
--> 232   inference_graph = extract_sub_graph(input_graph_def, output_node_names)
    233
    234   found_variables = {}

/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/graph_util_impl.py in _assert_nodes_are_present(name_to_node, nodes)
    131   """Assert that nodes are present in the graph."""
    132   for d in nodes:
--> 133     assert d in name_to_node, "%s is not in graph" % d
    134
    135

AssertionError: detection_classes is not in graph
You can see my frozen graph using TensorBoard.
And here is the file from which I created the frozen graph.
The TensorRT package may work for the models listed (even with TensorRT 4). If you run into an error related to that repository, could you please open an issue with the error log under
I am trying to convert a TensorFlow .pb file to a UFF file using uff, and it failed.
This is on TensorRT 4.1 with CUDA 8.0.
import tensorflow as tf
import uff

if __name__ == "__main__":
    uff.from_tensorflow(graphdef="/home/Work/Tensorrt/Model/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb",
                        output_filename="ssd.uff",
                        output_nodes=['detection_scores', 'detection_boxes', 'detection_classes', 'num_detections'])
output:
import pandas.parser as _parser
Using output node detection_scores
Using output node detection_boxes
Using output node detection_classes
Using output node num_detections
Converting to UFF graph
Traceback (most recent call last):
File "convert.py", line 7, in
output_nodes=['detection_scores', 'detection_boxes', 'detection_classes', 'num_detections'])
File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 120, in from_tensorflow
name="main")
File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 77, in convert_tf2uff_graph
uff_graph, input_replacements)
File "/usr/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 54, in convert_tf2uff_node
raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: num_detections was not found in the graph. Please use the -l option to list nodes in the graph.
I am pretty sure the output node names are right. Please help.