No importer registered for op: Einsum


I executed this command:
TensorRT- --onnx=action.onnx --saveEngine=action.trt

but it failed.


TensorRT Version:
GPU Type: V100
Nvidia Driver Version: 440.64.00
CUDA Version: 10.2
CUDNN Version: 7.6
Operating System + Version: ubuntu18.04
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): None
PyTorch Version (if applicable): 1.6
Baremetal or Container (if container which image + tag): Baremetal

Relevant Files

[06/04/2020-18:07:13] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 56 [Einsum]:
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: Einsum
[06/04/2020-18:07:13] [E] Failed to parse onnx file
[06/04/2020-18:07:13] [E] Parsing model failed
[06/04/2020-18:07:13] [E] Engine creation failed
[06/04/2020-18:07:13] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /home/juzheng/soft/TensorRT- --onnx=action.onnx --saveEngine=action.trt

TensorRT 7.0 supports ONNX operators up to opset 11, and Einsum only becomes part of the ONNX spec from opset 12.


Thank you for your answer.
So which version of TensorRT will support Einsum?
And is there any alternative?

You have to create a custom plugin in TRT to add support for Einsum.


Sorry, I'm not able to implement an Einsum plugin myself.
I hope there is a simpler solution.

One thing you can try is to convert your PyTorch model to a TensorFlow model using the following flow:
torch -> ONNX -> .pb
and use the TF-TRT API to optimize your model. It will skip the unsupported layer and keep it as a TensorFlow op.

But for the best performance, a custom plugin is still the recommended approach.
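Another option, if you can modify the model source, is to rewrite the einsum call with primitive ops that the TensorRT ONNX parser already supports (MatMul, Transpose, Reshape), so the exported graph never contains an Einsum node. This is a sketch of the idea: many common einsum equations, such as "bij,bjk->bik", are just a batched matrix multiply. The demo below uses NumPy to show the equivalence; the same rewrite applies to torch.einsum vs. torch.matmul in your model code.

```python
import numpy as np

# Random batched operands: batch of 2, shapes (2, 3, 4) and (2, 4, 5)
a = np.random.rand(2, 3, 4)
b = np.random.rand(2, 4, 5)

# Einsum form, which would export as the unsupported ONNX Einsum op
out_einsum = np.einsum("bij,bjk->bik", a, b)

# Equivalent primitive op: batched matmul, which exports as ONNX MatMul
out_matmul = a @ b

# The two formulations produce the same result
assert np.allclose(out_einsum, out_matmul)
```

Whether your particular einsum equation can be rewritten this way depends on the subscripts; equations with more exotic contractions may also need a Transpose or Reshape around the matmul.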