Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[668...Mul_497]})

Description

This ONNX model runs fine with onnxruntime-gpu, and I did not see any unsupported operators reported by onnx2trt.
However, I cannot build the TensorRT engine with trtexec. The full error message is below:

[04/12/2023-10:54:15] [E] Error[10]: [optimizer.cpp::nvinfer1::builder::cgraph::LeafCNode::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[668...Mul_497]}.)
[04/12/2023-10:54:15] [E] Error[2]: [builder.cpp::nvinfer1::builder::Builder::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[04/12/2023-10:54:15] [E] Engine could not be created from network
[04/12/2023-10:54:15] [E] Building engine failed
[04/12/2023-10:54:15] [E] Failed to create engine from model or file.
[04/12/2023-10:54:15] [E] Engine set up failed
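For reference, a minimal sketch of the kind of onnxruntime-gpu smoke test used to confirm the model runs outside TensorRT. The model path and the substitution of dynamic dimensions with 1 are assumptions for illustration, not part of the original report:

```python
import numpy as np

def concrete_shape(shape, dynamic_default=1):
    # ONNX inputs may have dynamic dims (None or symbolic names like "batch");
    # substitute a fixed size so we can build a random smoke-test input.
    return [d if isinstance(d, int) else dynamic_default for d in shape]

def run_smoke_test(model_path):
    # Imported lazily so the shape helper above works without onnxruntime installed.
    import onnxruntime as ort

    session = ort.InferenceSession(
        model_path,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    feeds = {
        inp.name: np.random.rand(*concrete_shape(inp.shape)).astype(np.float32)
        for inp in session.get_inputs()
    }
    outputs = session.run(None, feeds)
    for meta, out in zip(session.get_outputs(), outputs):
        print(meta.name, out.shape)

if __name__ == "__main__":
    run_smoke_test("model_final_1.onnx")
```

If this runs without errors on the CUDA execution provider, the failure is specific to the TensorRT builder rather than the model itself.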

Environment

TensorRT Version: 8.5.3.1
GPU Type: RTX 3080
Nvidia Driver Version: 516.94
CUDA Version: 11.1
CUDNN Version: 8.0.5
Operating System + Version: Win10
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.1
Baremetal or Container (if container which image + tag):

Relevant Files

Here is the onnx model link.

Steps To Reproduce

trtexec.exe --onnx=model_final_1.onnx --saveEngine=model_final_1.engine --verbose

Hi,
Could you share the ONNX model and the script, if you have not already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

# Replace with the path to your ONNX model
model = onnx.load("your_model.onnx")
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

If you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
Thanks!

Hi,
I have checked the model with check_model.py, and the error message above was generated by trtexec.

I provided the ONNX model already, but not the script. There are too many scripts, and they are not really relevant, since everything needed is in the ONNX model itself, which runs with onnxruntime-gpu.


Hi @p890040,
Apologies for the delay.
Can you please try using the latest TensorRT version and let us know if the problem still persists?

The fix has been addressed in newer releases.

Thanks