tf2onnx

I am getting this error while trying to convert a Faster R-CNN model (with the input tensor converted from INT8 to float32) to ONNX.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2021-03-11 03:38:04,987 - WARNING - From /usr/local/lib/python3.6/dist-packages/tf2onnx/verbose_logging.py:76: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

2021-03-11 03:38:04.988779: E tensorflow/stream_executor/cuda/cuda_driver.cc:318] failed call to cuInit: UNKNOWN ERROR (303)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/importer.py", line 501, in _import_graph_def_internal
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node FirstStageFeatureExtractor/InceptionV2/Conv2d_1a_7x7/depthwise_weights/Assign was passed float from FirstStageFeatureExtractor/InceptionV2/Conv2d_1a_7x7/depthwise_weights:0 incompatible with expected float_ref.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 235, in <module>
main()
File "/usr/local/lib/python3.6/dist-packages/tf2onnx/convert.py", line 160, in main
graph_def, inputs, outputs = tf_loader.from_graphdef(args.graphdef, args.inputs, args.outputs)
File "/usr/local/lib/python3.6/dist-packages/tf2onnx/tf_loader.py", line 201, in from_graphdef
tf.graph_util.import_graph_def(graph_def, name='')
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/importer.py", line 405, in import_graph_def
producer_op_list=producer_op_list)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/importer.py", line 505, in _import_graph_def_internal
raise ValueError(str(e))
raise ValueError(str(e))
ValueError: Input 0 of node FirstStageFeatureExtractor/InceptionV2/Conv2d_1a_7x7/depthwise_weights/Assign was passed float from FirstStageFeatureExtractor/InceptionV2/Conv2d_1a_7x7/depthwise_weights:0 incompatible with expected float_ref.

Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import sys
import onnx
filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

In case you are still facing the issue, request you to share the trtexec "--verbose" log for further debugging.
Thanks!

Hi @NVES,

As I have already mentioned in my issue, I am not able to generate an ONNX model from a converted* TensorFlow model.

Converted* = “the input tensor converted from INT8 to float32”

Thanks and Regards,
Karthik

Hi @leburi40,

This doesn't look like a TensorRT issue. We recommend you post your query on the relevant platform.

Thank you.

Hi @spolisetty @NVES ,

I managed to convert the ONNX model directly, changed the input tensor from INT8 to float32, and ran the commands you provided. Please find the results below.

Before conversion:

----------------------------------------------------------------
Input filename: model.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.8.3
Domain:
Model version: 0
Doc string:
Unsupported ONNX data type: UINT8 (2)
ERROR: image_tensor:0:188 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)
[03/18/2021-13:37:17] [E] Failed to parse onnx file
[03/18/2021-13:37:17] [E] Parsing model failed
[03/18/2021-13:37:17] [E] Engine creation failed
[03/18/2021-13:37:17] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=model.onnx

After Conversion:

[03/18/2021-13:36:45] [I]
Input filename: model_con.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.8.3
Domain:
Model version: 0
Doc string:
[03/18/2021-13:36:46] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/18/2021-13:36:46] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[03/18/2021-13:36:46] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[03/18/2021-13:36:46] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[03/18/2021-13:36:46] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: Round. Attempting to import as plugin.
[03/18/2021-13:36:46] [I] [TRT] builtin_op_importers.cpp:3659: Searching for plugin: Round, plugin_version: 1, plugin_namespace:
[03/18/2021-13:36:46] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin Round version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && “Plugin not found, are the plugin name, version, and namespace correct?”
[03/18/2021-13:36:46] [E] Failed to parse onnx file
[03/18/2021-13:36:46] [E] Parsing model failed
[03/18/2021-13:36:46] [E] Engine creation failed
[03/18/2021-13:36:46] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=model_con.onnx

Commands to check onnx model

nvidia@nvidia-desktop:~$ python3
Python 3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

import sys
import onnx
filename = "model_con.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model)

Thanks in advance,
Karthik Leburi

Hi @leburi40,

Sorry for the delayed response. We do support Floor/Ceil; maybe these would be close enough.
Otherwise you need to implement a custom plugin for the unsupported op "Round".
For your reference,

https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/samplePlugin/README.md
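One caveat on the Floor/Ceil suggestion: the ONNX `Round` operator specifies round-half-to-even ("banker's rounding"), so substituting `Floor(x + 0.5)` changes behavior at exact .5 ties. A quick pure-Python illustration (Python's built-in `round` also uses half-to-even):

```python
import math

def floor_round(x: float) -> float:
    # The Floor-based substitute: rounds halves upward (away from zero for
    # positive x), unlike round-half-to-even.
    return math.floor(x + 0.5)

# Python's round(), like ONNX Round, rounds half to even.
for x in (1.4, 1.5, 2.5, 3.5):
    print(x, round(x), floor_round(x))
# The two agree except at ties like 2.5: round(2.5) == 2, floor_round(2.5) == 3.
```

Whether that off-by-one at ties matters depends on where Round appears in the graph; for coordinate snapping in a detection post-processing stage it may be acceptable.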

Thank you.

Hi Karthik,
Can you please share your code?
Thanks

Is this solved or not? If yes, kindly share what fixed it.