Description
I am trying to convert the following simple ONNX model to TensorRT:
https://cloud.safead.de/s/Wq2xm3dgZLoHXmx
Both trtexec and the Python API fail with the following error:
operation.cpp:203: DCHECK(!i->is_use_only()) failed.
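Since the conversion script itself was not shared, here is a minimal sketch of what the Python-API conversion path typically looks like with TensorRT 8.5 (the file name comes from the trtexec command below; the workspace size and builder settings are assumptions):

```python
# Sketch of an ONNX -> TensorRT build via the Python API.
# TensorRT may not be installed everywhere, so the import is guarded.
try:
    import tensorrt as trt
except ImportError:
    trt = None

def build_engine(onnx_path: str, workspace_bytes: int = 1 << 30):
    """Parse an ONNX file and build a serialized TensorRT engine."""
    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")
    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace_bytes)
    # The reported DCHECK failure occurs during this build step.
    return builder.build_serialized_network(network, config)
```

With a working setup, `build_engine("test.simple.onnx")` would return the serialized engine bytes; here it aborts during the build with the DCHECK message above.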
Environment
TensorRT Version : 8.5.2.2
GPU Type : A6000
Nvidia Driver Version : 525.60.13
CUDA Version : 11.8
CUDNN Version : 8.7.0
Operating System + Version : Ubuntu 22.04
Python Version (if applicable) : 3.10.6
PyTorch Version (if applicable) : 1.12.1
Baremetal or Container (if container which image + tag) : Baremetal
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Run trtexec --onnx=test.simple.onnx --saveEngine=test.trt
NVES
December 16, 2022, 11:37am
2
Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Meanwhile, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import onnx

filename = "test.simple.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command.
master/samples/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!
You can find the ONNX model in the link in my first post.
onnx.checker.check_model does not report any errors for the ONNX model.
The verbose output of trtexec can be found here: SafeAD Cloud
Hi,
We could reproduce the same error. Please allow us some time to work on this.
Thank you.
Hello, I have encountered the same error. Is there any progress?
The MatMul input size is too large, which causes this bug.
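If the last reply is right that an oversized MatMul input triggers the failure, one quick sanity check is whether any MatMul input's element count exceeds the signed 32-bit range that some kernels use for indexing. This is an assumption about the cause, not an official diagnosis, and the example shapes below are hypothetical (not taken from the attached model):

```python
import math

INT32_MAX = 2**31 - 1  # signed 32-bit limit sometimes used for tensor indexing

def exceeds_int32_volume(shape):
    """Return True if a tensor of this shape has more elements than INT32_MAX."""
    return math.prod(shape) > INT32_MAX

# Hypothetical example shapes (not taken from the attached model):
print(exceeds_int32_volume([1, 1024, 1024]))    # → False (about 1e6 elements)
print(exceeds_int32_volume([64, 65536, 4096]))  # → True (about 1.7e10 elements)
```

If one of the model's MatMul inputs fails this check, splitting the operation or reducing the batch/feature dimensions may be a workaround while the underlying issue is investigated.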