Elementwise layer does not support the given inputs and operator


I have a custom model that I trained using TensorFlow 2.10 and exported to an ONNX model using tf2onnx (the Python interface, not the command line). When importing this model using trtexec, I get the following output:

[11/24/2022-06:14:39] [V] [TRT] Parsing node: StatefulPartitionedCall/count_nonzero/NotEqual [Equal]
[11/24/2022-06:14:39] [V] [TRT] Searching for input: StatefulPartitionedCall/Greater:0
[11/24/2022-06:14:39] [V] [TRT] Searching for input: StatefulPartitionedCall/count_nonzero/zeros:0
[11/24/2022-06:14:39] [V] [TRT] StatefulPartitionedCall/count_nonzero/NotEqual [Equal] inputs: [StatefulPartitionedCall/Greater:0 -> (1, 100)[BOOL]], [StatefulPartitionedCall/count_nonzero/zeros:0 -> ()[BOOL]], 
[11/24/2022-06:14:39] [E] [TRT] parsers/onnx/ModelImporter.cpp:780: While parsing node number 1280 [Equal -> "StatefulPartitionedCall/count_nonzero/NotEqual:0"]:
[11/24/2022-06:14:39] [E] [TRT] parsers/onnx/ModelImporter.cpp:781: --- Begin node ---
[11/24/2022-06:14:39] [E] [TRT] parsers/onnx/ModelImporter.cpp:782: input: "StatefulPartitionedCall/Greater:0"
input: "StatefulPartitionedCall/count_nonzero/zeros:0"
output: "StatefulPartitionedCall/count_nonzero/NotEqual:0"
name: "StatefulPartitionedCall/count_nonzero/NotEqual"
op_type: "Equal"

[11/24/2022-06:14:39] [E] [TRT] parsers/onnx/ModelImporter.cpp:783: --- End node ---
[11/24/2022-06:14:39] [E] [TRT] parsers/onnx/ModelImporter.cpp:785: ERROR: parsers/onnx/onnx2trt_utils.cpp:888 In function elementwiseHelper:
[8] Assertion failed: elementwiseCheck(inputs, binary_op) && "Elementwise layer does not support the given inputs and operator."
[11/24/2022-06:14:39] [E] Failed to parse onnx file
[11/24/2022-06:14:39] [I] Finish parsing network model
[11/24/2022-06:14:39] [E] Parsing model failed
[11/24/2022-06:14:39] [E] Failed to create engine from model.
[11/24/2022-06:14:39] [E] Engine set up failed

What precisely is wrong with this model?


TensorRT Version:
GPU Type: GeForce RTX 3060 Mobile
Nvidia Driver Version: 520.56.06
CUDA Version: 11.8
CUDNN Version: 8.4
Operating System + Version: Arch Linux
Python Version (if applicable):
TensorFlow Version (if applicable): 2.10
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Please refer to the links below related to custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.



I am not using any custom plugins, just the default plugins that trtexec loads.


Could you please try the latest TensorRT version, 8.5.1, and let us know if you still face this issue?
Please share with us the ONNX model and the trtexec --verbose logs for better debugging.
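For reference, one way to capture the full verbose log to a file is something like the following (model.onnx and trtexec.log are placeholder names):

```shell
# Parse the ONNX model with verbose output and save the log for sharing.
# model.onnx and trtexec.log are placeholder names.
trtexec --onnx=model.onnx --verbose 2>&1 | tee trtexec.log
```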

Thank you.