I am trying to convert the following simple ONNX model to TensorRT:
It fails with the same error both with trtexec and with the Python API:
operation.cpp:203: DCHECK(!i->is_use_only()) failed.
TensorRT Version: 220.127.116.11
GPU Type: A6000
Nvidia Driver Version: 525.60.13
CUDA Version: 11.8
CUDNN Version: 8.7.0
Operating System + Version: Ubuntu 22.04
Python Version (if applicable): 3.10.6
PyTorch Version (if applicable): 1.12.1
Baremetal or Container (if container which image + tag): Baremetal
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
trtexec --onnx=test.simple.onnx --saveEngine=test.trt
Could you share the ONNX model and the script (if not shared already) so that we can assist you better?
Meanwhile, you can try a few things:

1) Validate your model with the below snippet:

import onnx

filename = yourONNXmodel
model = onnx.load(filename)
onnx.checker.check_model(model)

2) Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
You can find the ONNX model in the link in my first post.
onnx.checker.check_model does not report any error for the ONNX model.
The verbose output of trtexec can be found here: SafeAD Cloud
We could reproduce the same error. Please allow us some time to work on this.
Hello, I have met the same error, is there any progress?
The input of the MatMul is so large that it triggers this bug.
Any improvement on this topic? I get the same error during trtexec conversion.
In my case there was a torch.matmul call with dimension broadcasting (on a dimension other than the batch dim), which caused this problem. Once I removed the implicit broadcast and handled it explicitly with transposes etc., the issue was resolved.
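The idea behind the workaround above can be sketched in NumPy (the original case used torch.matmul, but the broadcasting semantics are the same); the shapes here are purely illustrative, and materializing the broadcast with an explicit expand is just one way of removing it, alongside the transposes the poster mentions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A matmul like the problematic one: B lacks leading dims, so matmul
# broadcasts it over more than just the batch dimension of A.
A = rng.standard_normal((4, 8, 16, 32))  # (batch, heads, M, K)
B = rng.standard_normal((1, 32, 64))     # broadcast over batch *and* heads

out_broadcast = A @ B                    # implicit broadcast inside matmul

# Workaround: materialize the broadcast explicitly before the matmul,
# so the exported graph contains a plain batched MatMul with matching
# leading dimensions instead of a broadcasting one.
B_explicit = np.broadcast_to(B, (4, 8, 32, 64))
out_explicit = A @ B_explicit

# Both paths compute the same result.
assert np.allclose(out_broadcast, out_explicit)
```

In the exported ONNX graph, the explicit expand shows up as its own node, leaving the MatMul itself free of non-batch broadcasting.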