convert_to_uff failing on Object Detection API TF1 model

Hey,

I have a model trained with the Object Detection API on my PC that I am trying to convert to UFF on the Jetson so it can be loaded as a custom model into dusty-nv detectNet.

When I run the command, I get an error that does not really tell me anything.

I am on JetPack 4.5 with TF < 2.

Output:

python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb
2021-04-14 05:22:54.339831: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Loading frozen_inference_graph.pb
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py:274: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

NOTE: UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.9
=== Automatically deduced input nodes ===
[name: "image_tensor"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_UINT8
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: -1
      }
      dim {
        size: 3
      }
    }
  }
}
]
=========================================

=== Automatically deduced output nodes ===
[name: "detection_boxes"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_scores"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_1/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_multiclass_scores"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack_5/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "detection_classes"
op: "Identity"
input: "add"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "num_detections"
op: "Identity"
input: "Postprocessor/Cast_4"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "raw_detection_boxes"
op: "Identity"
input: "Postprocessor/Squeeze"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
, name: "raw_detection_scores"
op: "Identity"
input: "Postprocessor/convert_scores"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
]
==========================================

Using output node detection_boxes
Using output node detection_scores
Using output node detection_multiclass_scores
Using output node detection_classes
Using output node num_detections
Using output node raw_detection_boxes
Using output node raw_detection_scores
Converting to UFF graph
Warning: No conversion function registered for layer: TensorArrayGatherV3 yet.
Converting Preprocessor/map/TensorArrayStack/TensorArrayGatherV3 as custom op: TensorArrayGatherV3
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py:226: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: Exit yet.
Converting Preprocessor/map/while/Exit_2 as custom op: Exit
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/Switch_2 as custom op: Switch
Warning: No conversion function registered for layer: LoopCond yet.
Converting Preprocessor/map/while/LoopCond as custom op: LoopCond
Warning: No conversion function registered for layer: LogicalAnd yet.
Converting Preprocessor/map/while/LogicalAnd as custom op: LogicalAnd
Warning: No conversion function registered for layer: Less yet.
Converting Preprocessor/map/while/Less_1 as custom op: Less
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Less/Enter as custom op: Enter
Warning: No conversion function registered for layer: Cast yet.
Converting Cast as custom op: Cast
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/Merge_1 as custom op: Merge
Warning: No conversion function registered for layer: NextIteration yet.
Converting Preprocessor/map/while/NextIteration_1 as custom op: NextIteration
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/Switch as custom op: Switch
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/Merge as custom op: Merge
Warning: No conversion function registered for layer: NextIteration yet.
Converting Preprocessor/map/while/NextIteration as custom op: NextIteration
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Enter as custom op: Enter
Warning: No conversion function registered for layer: Switch yet.
Converting Preprocessor/map/while/Switch_1 as custom op: Switch
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/Enter_1 as custom op: Enter
Warning: No conversion function registered for layer: Less yet.
Converting Preprocessor/map/while/Less as custom op: Less
Warning: No conversion function registered for layer: Merge yet.
Converting Preprocessor/map/while/Merge_2 as custom op: Merge
Warning: No conversion function registered for layer: NextIteration yet.
Converting Preprocessor/map/while/NextIteration_2 as custom op: NextIteration
Warning: No conversion function registered for layer: TensorArrayWriteV3 yet.
Converting Preprocessor/map/while/TensorArrayWrite/TensorArrayWriteV3 as custom op: TensorArrayWriteV3
Warning: No conversion function registered for layer: ResizeBilinear yet.
Converting Preprocessor/map/while/ResizeImage/resize/ResizeBilinear as custom op: ResizeBilinear
Warning: No conversion function registered for layer: TensorArrayReadV3 yet.
Converting Preprocessor/map/while/TensorArrayReadV3 as custom op: TensorArrayReadV3
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/TensorArrayReadV3/Enter_1 as custom op: Enter
Warning: No conversion function registered for layer: TensorArrayScatterV3 yet.
Converting Preprocessor/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3 as custom op: TensorArrayScatterV3
Warning: No conversion function registered for layer: TensorArrayV3 yet.
Converting Preprocessor/map/TensorArray as custom op: TensorArrayV3
Warning: No conversion function registered for layer: Range yet.
Converting Preprocessor/map/TensorArrayUnstack/range as custom op: Range
Warning: No conversion function registered for layer: Enter yet.
Converting Preprocessor/map/while/TensorArrayReadV3/Enter as custom op: Enter
Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 143, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 139, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 276, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 225, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 141, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 126, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 88, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 269, in parse_tf_attrs
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 269, in <dictcomp>
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 265, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 237, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 150, in convert_tf2numpy_dtype
    return tf.as_dtype(dtype).as_numpy_dtype
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/dtypes.py", line 126, in as_numpy_dtype
    return _TF_TO_NP[self._type_enum]
KeyError: 20

Any thoughts? It is a transfer-trained ssd_mobilenet_v2_coco_2018_03_29, if that has anything to do with it, but it came from the TF1 model zoo, so I think it should be OK?

Thanks :)

Hi

Have you removed the training nodes first?
If not, could you give it a try?

For example:

import tensorflow as tf  # TF 1.x

# sess is the tf.Session holding the trained graph; output_names is a list of
# output node names, e.g. ["detection_boxes", "detection_scores", ...]
frozen_graph = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), output_names)
frozen_graph = tf.graph_util.remove_training_nodes(frozen_graph)

# filename is the path where the cleaned frozen graph should be written
with open(filename, "wb") as ofile:
    ofile.write(frozen_graph.SerializeToString())
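If the graph has already been frozen and exported to a .pb, the same cleanup can also be applied to the GraphDef loaded from that file. A minimal sketch, assuming TF 1.x and that frozen_inference_graph.pb is the exported graph:

import tensorflow as tf

# Load the already-frozen GraphDef from disk
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Strip training-only nodes (e.g. CheckNumerics and pass-through Identity ops)
cleaned = tf.compat.v1.graph_util.remove_training_nodes(graph_def)

with open("frozen_inference_graph_cleaned.pb", "wb") as f:
    f.write(cleaned.SerializeToString())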

Thanks.

Please excuse my ignorance, but where would I incorporate that code in the process? I created the frozen graph using https://github.com/tensorflow/models/blob/master/research/object_detection/export_inference_graph.py

The model itself was trained using: https://github.com/tensorflow/models/blob/master/research/object_detection/model_main.py

I tried to run inference on the Jetson using just the frozen graph, but it is far too slow and there isn't enough memory either, so as I understand it I need to convert to UFF in order to build TensorRT FP16 engines and run those. I'm hoping to convert the model and run it with dusty-nv detectNet. I'm trying to avoid retraining for 30 hours with the dusty-nv SSD trainer, as my labels aren't set up for that either, which would be even more work.
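For reference, my rough understanding of that last step is that once a UFF file exists, an FP16 engine could be built with something like the sketch below (TensorRT 7 Python API on JetPack 4.5; the node names "Input" and "NMS", the 300x300 shape, and the file names are just placeholders, since the real names depend on how the graph gets converted):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Register TensorRT's built-in plugins (needed if the UFF uses plugin ops like NMS)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28   # 256 MiB of build workspace
    builder.fp16_mode = True               # request an FP16 engine
    builder.max_batch_size = 1

    # Placeholder node names/shape -- these depend on the converted UFF graph
    parser.register_input("Input", (3, 300, 300))
    parser.register_output("NMS")
    parser.parse("frozen_inference_graph.uff", network)

    engine = builder.build_cuda_engine(network)
    with open("model_fp16.engine", "wb") as f:
        f.write(engine.serialize())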

Thanks.

I never got this solved, but if anyone has similar issues to mine with training the dusty-nv detectNet on a custom dataset on their PC, Dusty kindly helped me get it working so I no longer have to convert this model: https://forums.developer.nvidia.com/t/dusty-nv-jetson-training-custom-data-sets-generating-labels/175008/18