Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
In the meantime, you can try a few things:
1) Validate your model with the snippet below (check_model.py):
import onnx
filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
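The trtexec step can be invoked as below; this is a sketch, and the model path (mymodel.onnx) and log file name are placeholders you should replace with your own:

```shell
# Parse the ONNX model and attempt an engine build, capturing the
# verbose log for debugging (--verbose greatly expands the output).
trtexec --onnx=mymodel.onnx --verbose 2>&1 | tee trtexec_verbose.log
```

If the build fails, the last lines of trtexec_verbose.log usually name the unsupported layer or opset.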
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!
I ran the check_model.py snippet on my ONNX model file. It ran and finished quietly, so I assume the model validated successfully. The full code in my check_model.py is as below:
import sys
import onnx
filename = r"C:\mypath\mymodel.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model, full_check=True)
As to the full log, could I work with Nvidia directly, instead of sharing such information on the public forums?
I got the same error. In my case it was because my ONNX model used opset 14, which was not supported by TensorRT at the time; I lowered it to opset 13 and it worked.