Error while converting my ONNX model: Could not find any implementation for node

Description

Hello,
I’m currently trying to convert an ONNX model to TensorRT, but I’m running into issues during the conversion.

My execution line :
trtexec.exe --onnx=model.onnx --saveEngine=test --verbose=True --fp16 --workspace=1024
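For reference, the same build should be reproducible through the TensorRT Python API. Below is a minimal sketch (assuming TensorRT 8.2, the same model.onnx, FP16 mode, and the 1024 MiB workspace used in the command above); it is not the exact flow trtexec runs internally:

import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX file and report any parser errors
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("Failed to parse the ONNX model")

# Build with FP16 enabled and a 1024 MiB workspace, as in the trtexec call
config = builder.create_builder_config()
config.max_workspace_size = 1024 << 20
config.set_flag(trt.BuilderFlag.FP16)

engine = builder.build_serialized_network(network, config)
if engine is None:
    raise SystemExit("Engine build failed")
with open("test", "wb") as f:
    f.write(engine)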

The error I get:

Does anyone have a clue on how to resolve my issue?

Thanks

Environment

TensorRT Version: 8.2.0.6
GPU Type: NVIDIA Quadro M1000M
Nvidia Driver Version: 30.0.15.1165
CUDA Version: 11.4
CUDNN Version: 8.2
Operating System + Version: Windows 10

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validating your model with the below snippet (a usage example follows this list)

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model, e.g. model.onnx
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is malformed
  2. Try running your model with the trtexec command.
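To run the checker from step 1 against your model, something like the following should work (assuming the snippet is saved as check_model.py next to the model file):

python check_model.py model.onnx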

In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
Thanks!

mini.onnx (68.9 KB)
Hello,
Thanks for your reply.
I can only share a sample of my model. I think that the issue comes from an “Add” layer.

You can find my trtexec command line and the verbose log in my first message.
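For reference, here is a minimal inspection sketch (assuming the attached sample is mini.onnx and the suspect layers use the standard ONNX “Add” op type) that lists every Add node together with the shapes of its inputs, which may help pinpoint the node the builder rejects:

import onnx
from onnx import shape_inference

model = shape_inference.infer_shapes(onnx.load("mini.onnx"))

# Collect the known shape of every tensor in the graph
shapes = {}
for vi in list(model.graph.value_info) + list(model.graph.input) + list(model.graph.output):
    shapes[vi.name] = [d.dim_value if d.HasField("dim_value") else d.dim_param
                       for d in vi.type.tensor_type.shape.dim]
for init in model.graph.initializer:  # constant weights also carry shapes
    shapes[init.name] = list(init.dims)

# Print each Add node with its input names and shapes
for node in model.graph.node:
    if node.op_type == "Add":
        print(node.name or "<unnamed Add>",
              [(name, shapes.get(name, "?")) for name in node.input])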

Hi,

Thank you for reporting this issue. Our team will work on this.
Please allow us some time.