Error while converting my onnx model : Could not find any implementation for node


I’m currently trying to convert an ONNX model to TensorRT, but I’m running into issues during the conversion.

My execution line :
trtexec.exe --onnx=model.onnx --saveEngine=test --verbose=True --fp16 --workspace=1024

The error I get:

Does anyone have a clue how to resolve my issue?



TensorRT Version:
GPU Type: NVIDIA Quadro M1000M
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version: 8.2
Operating System + Version: Windows 10

Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside that, you can try a few things:

  1. Validate your model with the snippet below:

import onnx
filename = "yourONNXmodel.onnx"  # replace with the path to your model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an error if the model is invalid

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.

mini.onnx (68.9 KB)
Thanks for your reply.
I can only share a sample of my model. I think the issue comes from an “Add” layer.

You can find my execution line with trtexec and the verbose log in my first message.


Thank you for reporting this issue. Our team will work on this.
Please allow us some time.