[02/01/2021-18:59:40] [W] [TRT] /workspace/TensorRT/parsers/onnx/onnx2trt_utils.cpp:216: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 33 [OneHot]:
ERROR: /workspace/TensorRT/parsers/onnx/ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: OneHot
[02/01/2021-18:59:40] [E] Failed to parse onnx file
[02/01/2021-18:59:41] [E] Parsing model failed
[02/01/2021-18:59:41] [E] Engine creation failed
[02/01/2021-18:59:41] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --explicitBatch --onnx=/workspace/models/saved_model_dialog_nlu.onnx --saveEngine=saved_model_dialog_nlu.trt
Is there a possible workaround for this? Or is there a guide to implementing what's needed to support that operation?
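For anyone hitting the same error: one common workaround (not an official fix, just a sketch) is to rewrite the OneHot node before export as a Gather on an identity matrix, since Gather is supported by the TensorRT ONNX parser while OneHot is not. The equivalence, shown here in plain NumPy with made-up example values:

```python
import numpy as np

def one_hot_via_gather(indices, depth):
    """One-hot encoding expressed as a Gather over an identity matrix.

    Row i of eye(depth) is exactly the one-hot vector for class i,
    so indexing (gathering) rows by `indices` reproduces OneHot.
    In ONNX terms this is Gather(eye, indices, axis=0).
    """
    eye = np.eye(depth, dtype=np.float32)  # constant (depth, depth) identity
    return eye[indices]

# Example: indices [2, 0, 1] with depth 4
indices = np.array([2, 0, 1])
print(one_hot_via_gather(indices, depth=4))
# [[0. 0. 1. 0.]
#  [1. 0. 0. 0.]
#  [0. 1. 0. 0.]]
```

In practice you would bake the identity matrix in as a constant initializer and swap the OneHot node for a Gather, either in the TensorFlow graph before export or in the ONNX graph with a tool such as onnx-graphsurgeon; the names above are illustrative only, and this assumes the depth is known at export time.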
Environment

TensorRT Version: 7.X
GPU Type: T4
Nvidia Driver Version: 440.64.00
CUDA Version: V11.1.74
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable): 2.4.1
PyTorch Version (if applicable): n/a
Baremetal or Container (if container which image + tag): n/a