convert_to_uff failing on Object Detection API TF1 model

Hey,

I have a model trained with the Object Detection API on my PC that I am trying to convert to UFF on the Jetson so it can be loaded as a custom model into dusty-nv detectNet.

When I run the conversion command I get an error that does not really tell me anything.

I am on JetPack 4.5 with TensorFlow < 2.

Output:

python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb
2021-04-14 05:22:54.339831: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Loading frozen_inference_graph.pb
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py:274: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

NOTE: UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.9
=== Automatically deduced input nodes ===
[name: "image_tensor"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

=== Automatically deduced output nodes ===
[name: "detection_boxes"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_scores"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_1/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_multiclass_scores"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_5/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_classes"
op: "Identity"
input: "add"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "num_detections"
op: "Identity"
input: "Postprocessor/Cast_4"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "raw_detection_boxes"
op: "Identity"
input: "Postprocessor/Squeeze"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "raw_detection_scores"
op: "Identity"
input: "Postprocessor/convert_scores"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]
==========================================

Using output node detection_boxes
Using output node detection_scores
Using output node detection_multiclass_scores
Using output node detection_classes
Using output node num_detections
Using output node raw_detection_boxes
Using output node raw_detection_scores
Converting to UFF graph
Warning: No conversion function registered for layer: TensorArrayGatherV3 yet.
Converting Preprocessor/map/TensorArrayStack/TensorArrayGatherV3 as custom op: TensorArrayGatherV3
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py:226: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: Exit yet.
Converting Preprocessor/map/while/Exit_2 as custom op: Exit
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/Switch_2 as custom op: Switch
Warning: No conversion function registered for layer: LoopCond yet.
Converting Preprocessor/map/while/LoopCond as custom op: LoopCond
Warning: No conversion function registered for layer: LogicalAnd yet.
Converting Preprocessor/map/while/LogicalAnd as custom op: LogicalAnd
Warning: No conversion function registered for layer: Less yet.
Converting Preprocessor/map/while/Less_1 as custom op: Less
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Less/Enter as custom op: Enter
Warning: No conversion function registered for layer: Cast yet.
Converting Cast as custom op: Cast
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/Merge_1 as custom op: Merge
Warning: No conversion function registered for layer: NextIteration yet.
Converting Preprocessor/map/while/NextIteration_1 as custom op: NextIteration
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/Switch as custom op: Switch
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/Merge as custom op: Merge
Warning: No conversion function registered for layer: NextIteration yet.
Converting Preprocessor/map/while/NextIteration as custom op: NextIteration
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Enter as custom op: Enter
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/Switch_1 as custom op: Switch
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Enter_1 as custom op: Enter
Warning: No conversion function registered for layer: Less yet.
Converting Preprocessor/map/while/Less as custom op: Less
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/Merge_2 as custom op: Merge
Warning: No conversion function registered for layer: NextIteration yet.
Converting Preprocessor/map/while/NextIteration_2 as custom op: NextIteration
Warning: No conversion function registered for layer: TensorArrayWriteV3 yet.
Converting Preprocessor/map/while/TensorArrayWrite/TensorArrayWriteV3 as custom op: TensorArrayWriteV3
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting Preprocessor/map/while/ResizeImage/resize/ResizeBilinear as custom op: ResizeBilinear
Warning: No conversion function registered for layer: TensorArrayReadV3 yet.
Converting Preprocessor/map/while/TensorArrayReadV3 as custom op: TensorArrayReadV3
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/TensorArrayReadV3/Enter_1 as custom op: Enter
Warning: No conversion function registered for layer: TensorArrayScatterV3 yet.
Converting Preprocessor/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3 as custom op: TensorArrayScatterV3
Warning: No conversion function registered for layer: TensorArrayV3 yet.
Converting Preprocessor/map/TensorArray as custom op: TensorArrayV3
Warning: No conversion function registered for layer: Range yet.
Converting Preprocessor/map/TensorArrayUnstack/range as custom op: Range
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/TensorArrayReadV3/Enter as custom op: Enter
Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 143, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 139, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 276, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 225, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 141, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 126, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 88, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 269, in parse_tf_attrs
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 269, in <dictcomp>
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 265, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 237, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 150, in convert_tf2numpy_dtype
    return tf.as_dtype(dtype).as_numpy_dtype
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/dtypes.py", line 126, in as_numpy_dtype
    return _TF_TO_NP[self._type_enum]
KeyError: 20

Any thoughts? It is a transfer-trained ssd_mobilenet_v2_coco_2018_03_29, if that has anything to do with it, but it was in the TF1 model zoo so I think it should be okay?

Thanks :)

Hi

Have you removed the training nodes first?
If not, could you give it a try?

For example:

# sess is an open TF1 session holding the graph; output_names is the list of
# output node names; filename is the destination .pb path.
frozen_graph = tf.graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), output_names)
frozen_graph = tf.graph_util.remove_training_nodes(frozen_graph)

with open(filename, "wb") as ofile:
    ofile.write(frozen_graph.SerializeToString())
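
In case it is useful, here is a rough sketch of wiring those two calls up for a frozen .pb; the file names and output node names below are placeholders only, and it assumes the frozen graph is imported back into a TF1 session first:

import tensorflow as tf

pb_path = "frozen_inference_graph.pb"                      # placeholder input path
output_names = ["detection_boxes", "detection_scores",
                "detection_classes", "num_detections"]     # placeholder output nodes

# Read the frozen GraphDef and import it into a fresh graph.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(pb_path, "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    with tf.compat.v1.Session(graph=graph) as sess:
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(), output_names)
        frozen_graph = tf.graph_util.remove_training_nodes(frozen_graph)

with open("frozen_cleaned.pb", "wb") as ofile:             # placeholder output path
    ofile.write(frozen_graph.SerializeToString())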

Thanks.

Please excuse my ignorance, but where would I incorporate that code in the process? I created the frozen graph using https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py

The model itself was trained using https://github.com/tensorflow/models/blob/master/research/object_detection/model_main.py

I tried to run inference with just the frozen graph on the Jetson, but it is way too slow and there is not enough memory either, so as I understand it I need to convert to UFF in order to then build TensorRT FP16 engines and run those. I am hoping to be able to convert it and run it using dusty-nv detectNet. I am trying to avoid having to retrain for 30 hours using the dusty-nv SSD trainer, as my labels are not set up for that either, which would be even more work.
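
For what it's worth, my rough understanding of the UFF-to-engine step is something like the sketch below; the input/output node names, shape, and file names here are just placeholders, not the real ones from my graph:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Placeholder node names, shape and file names for illustration only.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28
    builder.fp16_mode = True                      # ask for an FP16 engine
    parser.register_input("Input", (3, 300, 300))
    parser.register_output("NMS")
    parser.parse("frozen_inference_graph.uff", network)
    engine = builder.build_cuda_engine(network)
    with open("ssd_mobilenet_v2_fp16.engine", "wb") as f:
        f.write(engine.serialize())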

Thanks.

I never got this solved, but if anyone has similar issues to mine with training the dusty-nv detectNet on a custom data set on their PC, Dusty kindly helped me get it working so I no longer have to convert this model: https://forums.developer.nvidia.com/t/dusty-nv-jetson-training-custom-data-sets-generating-labels/175008/18

Hey, I finally figured out how to find the outputs using graphsurgeon. Roughly, the script is like the sketch below (my exact script isn't shown here; it assumes the graphsurgeon module that ships with TensorRT):
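
import graphsurgeon as gs

# Load the frozen graph and print its automatically deduced output nodes.
graph = gs.DynamicGraph("frozen_inference_graph.pb")
print("outputs:", graph.graph_outputs)

It returns this as outputs: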

2021-05-10 12:41:25.248557: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
outputs: [name: "detection_boxes"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_scores"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_1/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_multiclass_scores"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_5/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_classes"
op: "Identity"
input: "add"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "num_detections"
op: "Identity"
input: "Postprocessor/Cast_4"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "raw_detection_boxes"
op: "Identity"
input: "Postprocessor/Squeeze"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "raw_detection_scores"
op: "Identity"
input: "Postprocessor/convert_scores"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]

So I plugged them into your code and it now claims the outputs are not in the graph, even though they clearly are. Am I inputting them incorrectly?

Code:

import tensorflow as tf


model = 'mobilenetV2-objectAPI/frozen_inference_graph.pb'

detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.compat.v1.GraphDef()
    with tf.Session(graph=detection_graph) as sess:

        frozen_graph = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(),
            ["detection_boxes",
             "detection_scores",
             "detection_multiclass_scores",
             "detection_classes",
             "num_detections",
             "raw_detection_boxes",
             "raw_detection_scores"])
        frozen_graph = tf.graph_util.remove_training_nodes(frozen_graph)

        with open('edited_graph.pb', "wb") as ofile:
            ofile.write(frozen_graph.SerializeToString())

Output:

  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/graph_util_impl.py", line 152, in _assert_nodes_are_present
    assert d in name_to_node, "%s is not in graph" % d
AssertionError: detection_boxes is not in graph

Thanks.

It would be great if I could convert this Object Detection API re-trained model for use on the Jetson, as the SSD trained with PyTorch is not as good.

Note: when I try to load this model as an ONNX into detectNet it says it has the wrong dtype of uint8. So I converted the .pb to a .pb without uint8, and that produces the same error claiming the outputs are not present…

EDIT: I just tried to do the UFF conversion again using a config file you suggested in a different post:

https://forums.developer.nvidia.com/t/problem-converting-onnx-model-to-tensorrt-engine-for-ssd-mobilenet-v2/139337/30

and I got this output:

2021-05-10 13:09:42.208700: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Loading frozen_inference_graph.pb
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py:274: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

NOTE: UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 143, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 139, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 276, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 152, in from_tensorflow
    pre = importlib.import_module(os.path.splitext(os.path.basename(preprocessor))[0])
  File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/home/nvidia/models/toconvert/config.py", line 23, in <module>
    shape=[1, 3, 300, 300])
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../graphsurgeon/node_manipulation.py", line 150, in create_node
    node = NodeDef()
NameError: name 'NodeDef' is not defined

Thanks