Converting ONNX to TRT: [8] No importer registered for op: OneHot

Description

I am trying to convert an ONNX model to a TensorRT engine with the command:

trtexec --explicitBatch --onnx=/workspace/models/saved_model_dialog_nlu.onnx --saveEngine=saved_model_dialog_nlu.trt

and I am getting the following error:

[02/01/2021-18:59:40] [W] [TRT] /workspace/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 33 [OneHot]:
ERROR: /workspace/TensorRT/parsers/onnx/ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: OneHot
[02/01/2021-18:59:40] [E] Failed to parse onnx file
[02/01/2021-18:59:41] [E] Parsing model failed
[02/01/2021-18:59:41] [E] Engine creation failed
[02/01/2021-18:59:41] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --explicitBatch --onnx=/workspace/models/saved_model_dialog_nlu.onnx --saveEngine=saved_model_dialog_nlu.trt

Is there a possible workaround for this? Or is there a guide to implementing what's needed to support that operation?

Environment

TensorRT Version: 7.X
GPU Type: T4
Nvidia Driver Version: 440.64.00
CUDA Version: V11.1.74
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.4.1
PyTorch Version (if applicable): n/a
Baremetal or Container (if container which image + tag): n/a

Hi, could you share the ONNX model and the script so that we can assist you better?

In the meantime, you can try validating your model with the snippet below:

check_model.py

import onnx

filename = "yourONNXmodel.onnx"  # replace with the path to your model
model = onnx.load(filename)
onnx.checker.check_model(model)

Alternatively, you can try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

Thanks!

Hi NVES,

thanks for your reply. I am using trtexec.
The ONNX model checker works fine:

Python 3.6.9 (default, Oct  8 2020, 12:12:24) 
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import onnx
>>> import sys
>>> filename = "saved_model_dialog_nlu.onnx"
>>> model = onnx.load(filename)
>>> onnx.checker.check_model(model)
>>> 
>>> exit()

This is a large BERT model that I converted from Keras to ONNX; the conversion itself completed successfully.

You can pull the ONNX model here:

Let me know if you need more information.

When I look at the table of operations supported by the ONNX parser, I don't see OneHot listed.

Could that be the issue? Any idea how to work around it?
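
For what it's worth, the parser can also be queried for op support directly from Python. A minimal sketch, assuming the tensorrt Python bindings are installed:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
# Explicit-batch network, matching the --explicitBatch flag used above
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, TRT_LOGGER)
# Ask the parser whether it has an importer registered for OneHot
print("OneHot supported:", parser.supports_operator("OneHot"))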

Hi @francesco.ciannella,

Yes, you need to implement the custom op as a TensorRT plugin.
For your reference:

https://github.com/NVIDIA/TensorRT/blob/master/samples/opensource/samplePlugin/README.md
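
If you go the plugin route, a quick way to list which plugins your TensorRT build already registers is the small sketch below, using the tensorrt Python bindings; you will likely not find a OneHot entry there:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Register the stock TensorRT plugins so they show up in the registry
trt.init_libnvinfer_plugins(TRT_LOGGER, "")
for creator in trt.get_plugin_registry().plugin_creator_list:
    print(creator.name, creator.plugin_version)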

Alternatively, you can split the ONNX model around the unsupported node and run that part outside TensorRT; a couple of sketches follow below.
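
A minimal sketch of the splitting approach using onnx.utils.extract_model (available in onnx 1.8+). The tensor names here are hypothetical; inspect your model (for example with Netron) to find the actual boundary tensors around the OneHot node:

import onnx.utils

# Keep everything up to (but not including) the OneHot node for TensorRT.
# "input_ids" and "onehot_input" are placeholder tensor names.
onnx.utils.extract_model(
    "saved_model_dialog_nlu.onnx",
    "nlu_before_onehot.onnx",
    input_names=["input_ids"],
    output_names=["onehot_input"],
)

Another option worth mentioning: rewrite the OneHot node into ops the parser does support. With the default axis, OneHot(indices, depth, values) is equivalent to a Gather over a precomputed [depth, depth] lookup table. A sketch with onnx-graphsurgeon, assuming depth and values are constants in the graph:

import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("saved_model_dialog_nlu.onnx"))
for node in [n for n in graph.nodes if n.op == "OneHot"]:
    indices, depth, values = node.inputs  # assumes depth/values are constants
    d = int(depth.values)
    off, on = values.values  # OneHot "values" holds [off_value, on_value]
    # Lookup table: row i is the one-hot encoding of class i
    table = np.eye(d, dtype=values.values.dtype) * (on - off) + off
    node.op = "Gather"
    node.attrs = {"axis": 0}
    node.inputs = [gs.Constant(node.name + "_table", table), indices]
graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "saved_model_dialog_nlu_fixed.onnx")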

Thank you.