Hi @mchi, I am using the faster_rcnn_inception_v2_coco model from the TensorFlow 1 Detection Model Zoo. It contains the TensorArrayGatherV3 layer, which is included neither in the TRT supported-layers matrix nor among the TRT plugins.
I have tried several approaches with native TRT, all without success:
Method 1: Parsing the model to ONNX, then converting the ONNX model to a TensorRT engine
1.1 Converting the model to ONNX
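For completeness, the conversion was done with tf2onnx roughly as follows (a sketch, not my exact command; the output node names are the ones the UFF converter reports in Method 2 below):
$ python3 -m tf2onnx.convert \
    --graphdef /faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb \
    --output faster_rcnn_inceptionv2_coco_updated_model_opset12.onnx \
    --opset 12 \
    --inputs image_tensor:0 \
    --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0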
1.2 Generating the TRT engine
$ trtexec --onnx=//faster_rcnn_inceptionv2_coco_updated_model_opset12.onnx --explicitBatch
Error:
Unsupported ONNX data type: UINT8 (2)
ERROR: image_tensor:0:189 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)
[01/19/2021-14:52:42] [E] Failed to parse onnx file
[01/19/2021-14:52:42] [E] Parsing model failed
[01/19/2021-14:52:42] [E] Engine creation failed
[01/19/2021-14:52:42] [E] Engine set up failed
1.3 After applying a patch to the model to work around the Unsupported ONNX data type: UINT8 (2) issue (a sketch of that patch is shown after the error below), I got a new error:
Error:
While parsing node number 7 [Loop]:
ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[01/19/2021-20:35:59] [E] Failed to parse onnx file
[01/19/2021-20:35:59] [E] Parsing model failed
[01/19/2021-20:35:59] [E] Engine creation failed
[01/19/2021-20:35:59] [E] Engine set up failed
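For reference, the UINT8 patch from step 1.3 was essentially the following (a minimal sketch using the onnx Python package; the exact patch I applied may differ in details, and the patched file name here is illustrative):

import onnx

# TensorRT's ONNX parser rejects UINT8 (enum 2) network inputs, so
# retype the image_tensor:0 graph input to FLOAT (enum 1) before parsing.
model = onnx.load("faster_rcnn_inceptionv2_coco_updated_model_opset12.onnx")
for graph_input in model.graph.input:
    if graph_input.type.tensor_type.elem_type == onnx.TensorProto.UINT8:
        graph_input.type.tensor_type.elem_type = onnx.TensorProto.FLOAT
onnx.save(model, "faster_rcnn_inceptionv2_coco_updated_model_opset12_patched.onnx")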
Method 2: Parsing the model to UFF, then running the model with TRT
2.1 Parsing the model to UFF format:
$ python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py /faster_rcnn_inception_v2_coco_2018_01_28/frozen_inference_graph.pb -o faster_rcnn_inception_v2_coco.uff
Error:
Using output node detection_boxes
Using output node detection_scores
Using output node num_detections
Using output node detection_classes
Converting to UFF graph
Warning: No conversion function registered for layer: TensorArrayGatherV3 yet.
…
Converting Preprocessor/map/while/TensorArrayReadV3/Enter as custom op: Enter
Traceback (most recent call last):
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 96, in <module>
    main()
  File "/usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py", line 92, in main
    debug_mode=args.debug
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 229, in from_tensorflow_frozen_model
    return from_tensorflow(graphdef, output_nodes, preprocessor, **kwargs)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/conversion_helpers.py", line 178, in from_tensorflow
    debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 94, in convert_tf2uff_graph
    uff_graph, input_replacements, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 79, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes, debug_mode=debug_mode)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 41, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 222, in parse_tf_attrs
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 222, in <dictcomp>
    return {key: cls.parse_tf_attr_value(val) for key, val in attrs.items() if val is not None and val.WhichOneof('value') is not None}
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 218, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 190, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/usr/lib/python3.6/dist-packages/uff/bin/../../uff/converters/tensorflow/converter.py", line 103, in convert_tf2numpy_dtype
    return tf.as_dtype(dtype).as_numpy_dtype
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/dtypes.py", line 126, in as_numpy_dtype
    return _TF_TO_NP[self._type_enum]
KeyError: 20
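(Dtype enum 20 here is TensorFlow's DT_RESOURCE, which has no NumPy equivalent for the UFF converter to map to.)
The op types in the frozen graph that lack a conversion function can be enumerated with a short TF1 snippet like this (a sketch; frozen_inference_graph.pb is the same file passed to convert_to_uff.py above):

import collections
import tensorflow as tf  # TensorFlow 1.x

# Count the op types in the frozen graph; TensorArrayGatherV3, Enter and
# the other TensorArray control-flow ops show up here with no UFF mapping.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

op_counts = collections.Counter(node.op for node in graph_def.node)
for op, count in sorted(op_counts.items()):
    print("{}: {}".format(op, count))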
What possible solution can I apply? Should I use another model? Which pre-trained object detection model would you recommend for optimization as a TRT INT8 engine with the NMS ops placed on the CPU and deployment with DS-Triton? I was following the blog Deploying Models from TensorFlow Model Zoo Using NVIDIA DeepStream and NVIDIA Triton Inference Server | NVIDIA Technical Blog as an example, but for some reason it does not cover INT8 precision with the NMS ops placed on the CPU.