When I tried to convert a TensorFlow mobilenet_ssd model to UFF format using:
import uff

BOXES_NAME = 'detection_boxes'
CLASSES_NAME = 'detection_classes'
SCORES_NAME = 'detection_scores'
NUM_DETECTIONS_NAME = 'num_detections'
output_names = [BOXES_NAME, CLASSES_NAME, SCORES_NAME, NUM_DETECTIONS_NAME]

uff.from_tensorflow(graphdef="ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb",
                    output_filename="ssd_mobilenet_v2_coco.uff",
                    output_nodes=output_names)
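For reference, I assume the same conversion could also be run with the convert-to-uff command-line tool that ships with the uff package, roughly like this (I have not confirmed the exact flag names, this is just how I read its help text):

# flag names assumed from convert-to-uff --help
convert-to-uff ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb \
    -o ssd_mobilenet_v2_coco.uff \
    -O detection_boxes -O detection_classes -O detection_scores -O num_detections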
I got the following error:
Traceback (most recent call last):
File "touff.py", line 57, in <module>
output_nodes=output_names)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 157, in from_tensorflow
debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 108, in convert_tf2uff_graph
uff_graph, input_replacements, debug_mode=debug_mode)
File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 67, in convert_tf2uff_node
raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: num_detections was not found in the graph. Please use the -l option to list nodes in the graph.
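If I understand that message correctly, the "-l option" it mentions belongs to the convert-to-uff tool, so listing the nodes from the converter's point of view should be something like (again, the flag is my assumption based on the error message and the tool's help text):

# list node names as the UFF converter sees them (flag assumed)
convert-to-uff ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb -l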
I am sure that the nodes named “detection_boxes”, “detection_classes”, “detection_scores”, and “num_detections” really exist in my model, because I have checked them with the following code:
import tensorflow as tf

# Load the frozen graph
tf_graph = tf.GraphDef()
with open('ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb', 'rb') as f:
    tf_graph.ParseFromString(f.read())

# Import it and dump every operation name to a text file
with tf.Graph().as_default() as graph:
    tf.import_graph_def(tf_graph, name='')

with open('ssd_mobilenet_v2_coco_2018_03_29/node_names.txt', 'w') as f:
    for op in graph.get_operations():
        print(op.name)
        f.write(op.name + "\n")
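On top of dumping node_names.txt, a direct membership check against the frozen graphdef could look like the sketch below (it reuses tf_graph from above; this is just a quick check I sketched, not part of my original script):

# quick check: do the four expected output node names appear in the frozen graphdef?
expected = ['detection_boxes', 'detection_classes', 'detection_scores', 'num_detections']
node_names = set(node.name for node in tf_graph.node)
for name in expected:
    print(name, 'present' if name in node_names else 'MISSING')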
I have put the model file here:
https://github.com/firefoxhtjc/tf_trt_models/tree/master/frozen_model
Due to file size limitations, I split it into the following three parts:
ssd_mobilenet_v2_coco_2018_03_29.zip
ssd_mobilenet_v2_coco_2018_03_29.z01
ssd_mobilenet_v2_coco_2018_03_29.z02
My environment is a Jetson Nano flashed with “jetson-nano-sd-r32.1-2019-03-18.img”.
Could you help me?
PS: I have already seen this topic:
https://devtalk.nvidia.com/default/topic/1049802/jetson-nano/object-detection-with-mobilenet-ssd-slower-than-mentioned-speed/post/5327974/#5327974
Could you tell me, step by step, how you generated “sample_unpruned_mobilenet_v2.uff” from the TensorFlow model?
What do you mean by the word “unpruned”?