This ONNX model runs fine with onnxruntime-gpu.
I also didn't see any unsupported operators reported by onnx2trt.
But I cannot generate the TRT engine with trtexec. The full error message is below:
[04/12/2023-10:54:15] [E] Error[10]: [optimizer.cpp::nvinfer1::builder::cgraph::LeafCNode::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[668...Mul_497]}.)
[04/12/2023-10:54:15] [E] Error[2]: [builder.cpp::nvinfer1::builder::Builder::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[04/12/2023-10:54:15] [E] Engine could not be created from network
[04/12/2023-10:54:15] [E] Building engine failed
[04/12/2023-10:54:15] [E] Failed to create engine from model or file.
[04/12/2023-10:54:15] [E] Engine set up failed
Environment
TensorRT Version: 8.5.3.1
GPU Type: RTX 3080
Nvidia Driver Version: 516.94
CUDA Version: 11.1
CUDNN Version: 8.0.5
Operating System + Version: Win10
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.1
Baremetal or Container (if container which image + tag):
Hi,
Please share the ONNX model and the script, if not already shared, so that we can assist you better.
In the meantime, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import onnx

# Path to your ONNX model (placeholder).
filename = "your_model.onnx"
model = onnx.load(filename)
# Raises an exception if the model is structurally invalid.
onnx.checker.check_model(model)
print("Model is valid.")
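If the checker completes without raising an exception, the model structure is valid; otherwise the exception message points to the offending node.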
2) Try running your model with the trtexec command.
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
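For reference, an invocation that captures the verbose log might look like this (the model path and log file name are placeholders):

trtexec --onnx=your_model.onnx --verbose > trtexec_verbose.log 2>&1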
Thanks!
Hi,
I have checked the model with check_model.py.
And the error message above was generated by trtexec.
I already provided the ONNX model, but not the script. The full scripts are too long to share, and they shouldn't matter, because everything needed is in the ONNX model. It runs fine with onnxruntime-gpu.
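For completeness, this is roughly how the model is exercised with onnxruntime-gpu (a minimal sketch; the file name and the assumption of a single float32 input are placeholders):

import numpy as np
import onnxruntime as ort

# Load the model on the GPU via the CUDA execution provider.
sess = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])
inp = sess.get_inputs()[0]
# Replace any dynamic dimensions with 1 to build a dummy input.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])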
I am getting the below-mentioned error when trying to convert ONNX to a TRT engine with trtexec.
Error:
onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/19/2024-14:46:55] [I] Finished parsing network model. Parse time: 0.0699543
[03/19/2024-14:46:55] [I] [TRT] BuilderFlag::kTF32 is set but hardware does not support TF32. Disabling TF32.
[03/19/2024-14:46:55] [I] [TRT] Graph optimization time: 0.0470747 seconds.
[03/19/2024-14:46:55] [I] [TRT] BuilderFlag::kTF32 is set but hardware does not support TF32. Disabling TF32.
[03/19/2024-14:46:55] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[03/19/2024-14:46:55] [E] Error[10]: Could not find any implementation for node Conv_0.
[03/19/2024-14:46:55] [E] Error[10]: [optimizer.cpp::computeCosts::3869] Error Code 10: Internal Error (Could not find any implementation for node Conv_0.)
My TensorRT version is 8.6.1 and I still get this error. I have also used the --workspace flag, but it didn't work.
Does anybody know the solution?
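One thing worth double-checking on TensorRT 8.6: the --workspace flag is deprecated there in favor of --memPoolSize, so requesting a larger builder workspace would look like this (the model path and pool size in MiB are placeholders):

trtexec --onnx=model.onnx --memPoolSize=workspace:4096 --verbose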