sampleUffSSD does not work

Description

Hello,
I am trying to convert the sampleUffSSD model to a UFF file, following the reference below.
TensorRT/README.md at master · NVIDIA/TensorRT · GitHub
But when I run [convert-to-uff ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb -O NMS -p config.py],
I get the following error and cannot convert.

(TF115) D:\TensorRT-7.2.3.4\samples\sampleUffSSD>convert-to-uff ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb -O NMS -p config.py
2021-05-27 11:57:15.648921: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
Loading ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb
WARNING:tensorflow:From c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py:274: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

NOTE: UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.9
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
WARNING:tensorflow:From c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\converter.py:226: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Traceback (most recent call last):
  File "C:\Users\tats-kobayashi\AppData\Local\Programs\Python\Python36\Lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Users\tats-kobayashi\AppData\Local\Programs\Python\Python36\Lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\tats-kobayashi\AppData\Local\Programs\Python\Python36\env\TF115\Scripts\convert-to-uff.exe\__main__.py", line 9, in <module>
  File "c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\bin\convert_to_uff.py", line 139, in main
    debug_mode=args.debug
  File "c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py", line 276, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\conversion_helpers.py", line 225, in from_tensorflow
    debug_mode=debug_mode)
  File "c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\converter.py", line 141, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\converter.py", line 126, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\converter.py", line 94, in convert_layer
    return cls.registry_[op](name, tf_node, inputs, uff_graph, **kwargs)
  File "c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\converter_functions.py", line 455, in convert_depthwise_conv2d_native
    return _conv2d_helper(name, tf_node, inputs, uff_graph, func="depthwise", **kwargs)
  File "c:\users\tats-kobayashi\appdata\local\programs\python\python36\env\tf115\lib\site-packages\uff\converters\tensorflow\converter_functions.py", line 480, in _conv2d_helper
    number_groups = int(wt.attr['value'].tensor.tensor_shape.dim[2].size)
IndexError: list index (2) out of range

Please tell me the solution

Environment

TensorRT Version: 7.2.3.4
GPU Type: RTX2070
Nvidia Driver Version: 456.81
CUDA Version: 10.0
CUDNN Version: 7.6.0
Operating System + Version: Windows10 64bit
Python Version (if applicable): 3.6.5
TensorFlow Version (if applicable): 1.15.3
PyTorch Version (if applicable): Not used
UFF Version: 0.6.9
GraphSurgeon Version: 0.4.5

Relevant Files

I'm using the TensorRT sample files as-is.
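For reference, config.py is the preprocessing script passed with -p; it maps the TensorFlow namespaces of the SSD graph onto the TensorRT plugin ops (GridAnchor_TRT, NMS_TRT, FlattenConcat_TRT). A condensed sketch of what it does is below; the plugin parameters are trimmed here, so the file shipped with the sample remains the authoritative version:

import graphsurgeon as gs
import tensorflow as tf

# Replacement input node matching the SSD Inception v2 input (1x3x300x300)
Input = gs.create_node("Input", op="Placeholder", dtype=tf.float32, shape=[1, 3, 300, 300])

# Placeholder nodes for the TensorRT plugins; the real config.py also sets
# the remaining plugin parameters (anchor sizes, NMS thresholds, etc.) here.
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT", numLayers=6)
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT", numClasses=91, keepTopK=100)
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)

# Map TF namespaces/nodes onto the replacement nodes defined above
namespace_plugin_map = {
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "Preprocessor": Input,
    "ToFloat": Input,
    "image_tensor": Input,
    "MultipleGridAnchorGenerator/Concatenate": concat_priorbox,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf,
}

def preprocess(dynamic_graph):
    # Collapse the mapped namespaces into the plugin nodes
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Drop the original graph outputs so NMS becomes the single output
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)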

Steps To Reproduce

The procedure is as shown in the official reference below.
(https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/sampleUffSSD/README.md)

Hi,
Please refer to the installation steps in the link below in case you are missing anything.

However, the suggested approach is to use the TRT NGC containers to avoid any system-dependency-related issues.
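For example, a container can be started like this (the tag is an assumption; pick the NGC release that matches the TensorRT version you need):

docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:21.05-py3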

In order to run the Python samples, make sure the TRT Python packages are installed while using the NGC container:
/opt/tensorrt/python/python_setup.sh
Thanks!

Hi @tats-kobayashi,

The UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so we request that you try the ONNX parser instead. Please check the links below for the same.

SampleSSD,
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleSSD
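As a rough sketch of that route (not taken from the sample; the file names, tensor names, and opset below are placeholders you would need to adjust for your graph), the frozen graph is typically exported to ONNX with tf2onnx, e.g.

python -m tf2onnx.convert --graphdef frozen_inference_graph.pb --output model.onnx --inputs <input_tensor>:0 --outputs <output_tensor>:0 --opset 11

and the resulting file can then be parsed with TensorRT's ONNX parser from Python:

import tensorrt as trt

# Minimal sketch: build an engine from an ONNX file with TensorRT 7.x
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
# The ONNX parser requires an explicit-batch network definition
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB of build workspace
engine = builder.build_engine(network, config)

Depending on the model, the SSD post-processing (NMS) may still need a plugin or graph modifications before the ONNX parse succeeds.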

Thanks!

Thank you for your reply.
I will try ONNX.
