[IndexError: list index out of range] converting frozen.pb to UFF

Hello

I tried to convert a frozen.pb file to a UFF file on a Jetson Nano.

This is the command I used:

sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py /home/ubit/work/infer/frozen_inference_graph.pb -o /home/ubit/work/tmp/test.uff -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py

and the following output appears:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Loading /home/ubit/work/infer/frozen_inference_graph.pb
NOTE: UFF has been tested with TensorFlow 1.14.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
WARNING: To create TensorRT plugin nodes, please use the create_plugin_node function instead.
UFF Version 0.6.5
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
input: "Cast"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
Warning: No conversion function registered for layer: Cast yet.
Converting Cast as custom op: Cast
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_5_3x3_s2_128/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_5_1x1_64/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_4_1x1_128/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_3_3x3_s2_256/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_3_1x1_128/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_2_3x3_s2_512/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/layer_19_1_Conv2d_2_1x1_256/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/Conv_1/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/Conv_1/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/Conv_1/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_16/project/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_16/project/BatchNorm_Fold/add as custom op: AddV2
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_16/project/weights_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: FakeQuantWithMinMaxVars yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_16/depthwise/act_quant/FakeQuantWithMinMaxVars as custom op: FakeQuantWithMinMaxVars
Warning: No conversion function registered for layer: AddV2 yet.
Converting FeatureExtractor/MobilenetV2/expanded_conv_16/depthwise/BatchNorm_Fold/add as custom op: AddV2
Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 96, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 92, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 229, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 47, in convert_layer
    return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter_functions.py", line 408, in convert_depthwise_conv2d_native
    return _conv2d_helper(name, tf_node, inputs, uff_graph, func="depthwise", **kwargs)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter_functions.py", line 433, in _conv2d_helper
    number_groups = int(wt.attr['value'].tensor.tensor_shape.dim[2].size)
  File "/usr/local/lib/python3.6/dist-packages/google/protobuf/internal/containers.py", line 208, in __getitem__
    return self._values[key]
IndexError: list index out of range
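If I read the traceback correctly, the failing line takes dim[2] of the tensor shape recorded in the weight node's value attribute, which is only populated for Const weight nodes. My graph is quantization-aware trained, so the depthwise conv's weight input appears to be a FakeQuantWithMinMaxVars op with no recorded shape, and the dim list is empty. A minimal sketch of that failure mode (the shape values below are hypothetical, just for illustration):

```python
# Hypothetical sketch of the failure in _conv2d_helper: the converter
# indexes dim[2] of the weight node's recorded tensor shape, which is
# empty when the weight input is not a Const node.
def number_of_groups(weight_dims):
    # mirrors: int(wt.attr['value'].tensor.tensor_shape.dim[2].size)
    return int(weight_dims[2])

const_weight_dims = [3, 3, 512, 1]  # example HWCM shape of a Const weight
quant_weight_dims = []              # nothing recorded for a non-Const input

print(number_of_groups(const_weight_dims))  # 512

try:
    number_of_groups(quant_weight_dims)
except IndexError as exc:
    print(exc)  # list index out of range
```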

The URL below points to my Google Drive, which contains the model and training files I used.

https://drive.google.com/open?id=1xEcDTwRuKBGUgxe-T6nOh7w7f2U_01n-

Can I do anything about this problem?

Thanks