I load three TensorRT engines in a Python script. The first two are simple models made of several conv+bn+relu blocks; the third is a GAN based on UNet-256. All engines are built by parsing ONNX models with the TensorRT ONNX parser. Call the load order "a, b, c". During warm-up it returns the error below, and the output images are uniformly gray (all RGB values equal to 127).

[TensorRT] ERROR: 1: [ltWrapper.cpp::plainGemm::483] Error Code 1: Cublas (CUBLAS_STATUS_EXECUTION_FAILED)

When I change the load order to "c, a, b", it returns the correct results with no error.


TensorRT Version: 8.0.1
GPU Type: RTX 2080 Ti
Nvidia Driver Version: 470.57
CUDA Version: 11.3
cuDNN Version: 8.2.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.9
PyTorch Version (if applicable): 1.11

Could you please share the ONNX model and the script, if you haven't already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

import onnx
filename = yourONNXmodel  # path to your .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
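For reference, a typical trtexec invocation looks like the following ("model.onnx" is a placeholder for your model file):

```shell
# Build an engine from the ONNX model and print verbose build/inference logs.
trtexec --onnx=model.onnx --verbose

# Optionally serialize the engine so it can be reloaded without rebuilding.
trtexec --onnx=model.onnx --saveEngine=model.engine --verbose
```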

Sorry, I can't upload my model because of company policy. Thanks, I will try your suggestions.

Hi @nnx156925,

This error corresponds to a known issue that was fixed in later versions of TensorRT.
We recommend that you try the latest TensorRT version. Also, please make sure the other dependencies are installed correctly by checking Support Matrix :: NVIDIA Deep Learning TensorRT Documentation and Installation Guide :: NVIDIA Deep Learning TensorRT Documentation

Thank you.