How can I convert a TensorFlow mobilenet_ssd model to UFF format?

When I tried to convert the TensorFlow mobilenet_ssd model to UFF format with the following code:

import uff

# Output node names of the TF object-detection frozen graph.
BOXES_NAME = 'detection_boxes'
CLASSES_NAME = 'detection_classes'
SCORES_NAME = 'detection_scores'
NUM_DETECTIONS_NAME = 'num_detections'

output_names = [BOXES_NAME, CLASSES_NAME, SCORES_NAME, NUM_DETECTIONS_NAME]
uff.from_tensorflow(graphdef="ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb",
                    output_filename="ssd_mobilenet_v2_coco.uff",
                    output_nodes=output_names)

I got this error:

Traceback (most recent call last):
  File "touff.py", line 57, in <module>
    output_nodes=output_names)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 157, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 108, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 67, in convert_tf2uff_node
    raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: num_detections was not found in the graph. Please use the -l option to list nodes in the graph.
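
(For reference, the "-l option" in the error message refers to the converter CLI that ships with the uff package; using the same invocation style as later in this thread, the graph's node names can be listed with:

convert-to-uff --input-file frozen_inference_graph.pb -l
)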

I am sure that the nodes named "detection_boxes", "detection_classes", "detection_scores", and "num_detections" really exist in my model,
because I have checked for them with the following code:

import tensorflow as tf

tf_graph = tf.GraphDef()
with open('ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb', 'rb') as f:
    tf_graph.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(tf_graph, name='')
        with open('ssd_mobilenet_v2_coco_2018_03_29/node_names.txt', 'w') as names_file:
            # Print and save every operation name in the graph.
            for op in graph.get_operations():
                print(op.name)
                names_file.write(op.name + "\n")

I put the model file here:
https://github.com/firefoxhtjc/tf_trt_models/tree/master/frozen_model
Due to file size limitations, I split the file into the following three parts:
ssd_mobilenet_v2_coco_2018_03_29.zip
ssd_mobilenet_v2_coco_2018_03_29.z01
ssd_mobilenet_v2_coco_2018_03_29.z02

My environment is a Jetson Nano flashed with "jetson-nano-sd-r32.1-2019-03-18.img".

Could you help me?

PS: I have seen this topic:
https://devtalk.nvidia.com/default/topic/1049802/jetson-nano/object-detection-with-mobilenet-ssd-slower-than-mentioned-speed/post/5327974/#5327974

Could you tell me, step by step, how you produced "sample_unpruned_mobilenet_v2.uff" from the TensorFlow model?
What do you mean by the word "unpruned"?

Hi,

Somehow I cannot unzip your model; could you give it a check?

Would you mind sharing the output log of the op names?
Is it possible that the mismatch is caused by some prefix?

"Unpruned" indicates that the model has not been pruned.
Sometimes we cut parts of a model to save execution time.
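
For reference, one simple, graph-level form of such cutting in TF 1.x is tf.graph_util.extract_sub_graph, which drops every node not needed to compute the listed outputs (a minimal sketch using the file and output names from earlier in this thread; note that "pruned" in a model name can also refer to channel/weight pruning of the network itself):

import tensorflow as tf

# Load the frozen graph (TF 1.x API).
graph_def = tf.GraphDef()
with open('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Keep only the nodes required to compute the listed outputs;
# everything unreachable from them is dropped.
pruned = tf.graph_util.extract_sub_graph(
    graph_def,
    ['detection_boxes', 'detection_classes',
     'detection_scores', 'num_detections'])

with open('pruned_graph.pb', 'wb') as f:
    f.write(pruned.SerializeToString())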

Thanks.

Hi,
Thank you for the reply!
The model is the standard TensorFlow model from:
http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
After unzipping it, frozen_inference_graph.pb is my test model. I didn't modify it; it is nothing special.

When I used the following code:

import tensorflow as tf

tf_graph = tf.GraphDef()
with open('ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb', 'rb') as f:
    tf_graph.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(tf_graph, name='')
        with open('ssd_mobilenet_v2_coco_2018_03_29/node_names.txt', 'w') as names_file:
            # Print and save every operation name in the graph.
            for op in graph.get_operations():
                print(op.name)
                names_file.write(op.name + "\n")

I got 7975 op names, listed here:
https://github.com/firefoxhtjc/tf_trt_models/tree/master/frozen_model/tfnode.txt

And I modified the function "convert_tf2uff_node()" in the uff package file "converter.py"
at "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py" as follows:

@classmethod
def convert_tf2uff_node(cls, name, tf_nodes, uff_graph, input_replacements, debug_mode=False):
    if name in uff_graph.nodes:
        if debug_mode:
            _debug_print(name + " already in UFF graph, skipping.")
        return []
    if name in input_replacements:
        new_name, dtype, shape = input_replacements[name]
        uff_graph.input(shape, dtype, new_name)
        if debug_mode:
            _debug_print("Replacing " + name + " with: " + new_name + " of type " + str(dtype) + " with shape " + str(shape))
        return []

    # --- my addition: dump every node name the converter can see ---
    print("==============start===============")
    print("tf_nodes=")
    for nodeName in tf_nodes:
        print(nodeName + "\n")
    print("==============end===============")
    # --- end of my addition ---

    if name not in tf_nodes:
        raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
    tf_node = tf_nodes[name]
    inputs = list(tf_node.input)
    if debug_mode:
        _debug_print("Converting " + str(tf_node.op) + " node " + str(tf_node.name))
    # Find any identity inputs and don't add them to the UFF graph.
    for i, inp in enumerate(inputs):
        inp_name, num = cls.split_node_name_and_output(inp)
        if debug_mode:
            _debug_print("Found input " + str(inp_name))
        inp_node = tf_nodes[inp_name]
        if inp_node.op == 'Identity':
            if debug_mode:
                _debug_print("Removing Identity input from graph")
            inputs[i] = inp_node.input[0]
    op = tf_node.op
    uff_node = cls.convert_layer(
        op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
    return uff_node

From the debug print I got only 7452 op names, listed here:
https://github.com/firefoxhtjc/tf_trt_models/tree/master/frozen_model/uffnode.txt

So many ops disappeared!
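
For reference, a quick way to see exactly which names are missing is to diff the two lists (a minimal sketch; the file names are the ones from the links above):

# Compare the node names seen by TensorFlow with those the UFF
# converter iterated over, and print the ones that disappeared.
with open('tfnode.txt') as f:
    tf_names = set(line.strip() for line in f)
with open('uffnode.txt') as f:
    uff_names = set(line.strip() for line in f)

for name in sorted(tf_names - uff_names):
    print(name)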
And this is my log:

DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:99] Creating new UFF metagraph: main
Traceback (most recent call last):
  File "touff.py", line 57, in <module>
    output_nodes=output_names)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 157, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 108, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py", line 67, in convert_tf2uff_node
    raise UffException(str(name) + " was not found in the graph. Please use the -l option to list nodes in the graph.")
uff.model.exceptions.UffException: num_detections was not found in the graph. Please use the -l option to list nodes in the graph.

Finally, I want to say that WHAT I NEED IS SOMETHING LIKE "/usr/src/tensorrt/samples/python/uff_ssd/utils/model.py".

After reading the code of "model.py", I guess the reason for the error above is that TensorRT 5.0.6 does not support all of the ops in the MobileNet-SSD model,
because even in the sample "/usr/src/tensorrt/samples/python/uff_ssd", a FlattenConcat plugin is needed.
Here is part of model.py:

def model_to_uff(model_path, output_uff_path, silent=False):
    """Takes frozen .pb graph, converts it to .uff and saves it to file.

    Args:
        model_path (str): .pb model path
        output_uff_path (str): .uff path where the UFF file will be saved
        silent (bool): if False, writes progress messages to stdout

    """
    dynamic_graph = gs.DynamicGraph(model_path)
    dynamic_graph = ssd_unsupported_nodes_to_plugin_nodes(dynamic_graph)
    
    uff.from_tensorflow(
        dynamic_graph.as_graph_def(),
        [ModelData.OUTPUT_NAME],
        output_filename=output_uff_path,
        text=True
    )
You can see that extra work is needed before uff.from_tensorflow().
But model.py does not support MobileNet-SSD, what a pity!
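
For reference, the uff_ssd sample makes the FlattenConcat plugin available by loading the compiled library before building the engine, roughly like this (a sketch; the .so path depends on where you built the sample):

import ctypes
import tensorrt as trt

# Load the compiled FlattenConcat plugin so TensorRT can resolve
# the FlattenConcat_TRT op when parsing the UFF model.
ctypes.CDLL('/usr/src/tensorrt/samples/python/uff_ssd/build/libflattenconcat.so')

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Register all loaded plugins with TensorRT.
trt.init_libnvinfer_plugins(TRT_LOGGER, '')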

I think you have the UFF conversion code for MobileNet-SSD,
because in the topic "https://devtalk.nvidia.com/default/topic/1049802/jetson-nano/object-detection-with-mobilenet-ssd-slower-than-mentioned-speed/post/5327974/#5327974"
there is a UFF file of MobileNet-SSD.

Just give it to me, or tell me where it is on my Jetson Nano, and the problem will be solved.

Hi,

We have gained some experience with ssd_mobilenet_v2 recently.
Would you mind checking whether your issue can be solved with the following first?

First, there is a bug in the UFF parser.
And here is a workaround to help you bypass the issue.

1. Please try to convert your .pb file with a config.py (a sketch of what such a config.py can look like follows after step 2):

convert-to-uff --input-file frozen_inference_graph.pb -O NMS -p config.py

2. Update your TF graph's batch dimension to anything other than -1, and use the config.py shared in this comment:
https://devtalk.nvidia.com/default/topic/1050465/jetson-nano/how-to-write-config-py-for-converting-ssd-mobilenetv2-to-uff-format/post/5331289/#5331289
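
For reference, such a config.py maps the TF subgraphs that UFF cannot handle onto TensorRT plugin nodes with graphsurgeon. Below is a sketch along the lines of the config shared in that comment; the attribute values are the ones commonly used for ssd_mobilenet_v2 with a 300x300 input and may need adjusting for your model:

import graphsurgeon as gs
import tensorflow as tf

# Replace the TF input pipeline with a fixed-shape placeholder.
Input = gs.create_node("Input", op="Placeholder",
                       dtype=tf.float32, shape=[1, 3, 300, 300])

# Anchor generation is handled by the GridAnchor_TRT plugin.
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    minSize=0.2, maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1, 0.1, 0.2, 0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1],
    numLayers=6)

# Post-processing (box decoding + NMS) is handled by the NMS_TRT
# plugin; it becomes the single output node of the UFF graph.
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1, varianceEncodedInTarget=0,
    backgroundLabelId=0, confidenceThreshold=1e-8,
    nmsThreshold=0.6, topK=100, keepTopK=100,
    numClasses=91, inputOrder=[1, 0, 2],
    confSigmoid=1, isNormalized=1)

concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2",
                                 dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc",
    op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf",
    op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

# Collapse whole TF namespaces into the plugin nodes above.
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Rewrite the graph so the UFF converter only sees supported ops.
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Drop the original TF outputs; NMS becomes the only output node.
    dynamic_graph.remove(dynamic_graph.graph_outputs,
                         remove_exclusive_dependencies=False)

With a config like this, the converter only ever sees the collapsed graph, and NMS (the -O NMS in step 1) is the single output node, so the parser no longer looks for nodes such as num_detections.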

Thanks.