Description
[TensorRT] WARNING: Detected invalid timing cache, setup a local cache instead
[TensorRT] ERROR: 2: [ltWrapper.cpp::setupHeuristic::327] Error Code 2: Internal Error (Assertion cublasStatus == CUBLAS_STATUS_SUCCESS failed.)
I encounter this error using TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-10.2.cudnn8.2.
The error does not happen with TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-11.3.cudnn8.2.
I suspect CUDA 10.2 is the cause. After applying CUDA 10.2 patches 1 and 2, the error still occurs.
I want to deploy the model to an NVIDIA Drive AGX, so I have to use CUDA 10.2.
Any idea how to solve this problem?
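Since the failing assertion is `cublasStatus == CUBLAS_STATUS_SUCCESS` inside TensorRT's ltWrapper, one way to narrow the problem down is to check whether cuBLAS initializes at all outside TensorRT on the same machine. A minimal sketch (the file name and CUDA install path are assumptions, adjust to your setup):

```c
/* check_cublas.c - hypothetical standalone sanity check: if
 * cublasCreate() fails here too, the problem is in the CUDA 10.2
 * install (driver/library mismatch), not in TensorRT itself. */
#include <stdio.h>
#include <cublas_v2.h>

int main(void) {
    cublasHandle_t handle;
    cublasStatus_t status = cublasCreate(&handle);
    if (status != CUBLAS_STATUS_SUCCESS) {
        fprintf(stderr, "cublasCreate failed with status %d\n", (int)status);
        return 1;
    }
    printf("cuBLAS initialized successfully\n");
    cublasDestroy(handle);
    return 0;
}
```

Compile against the CUDA 10.2 toolkit, e.g. `gcc check_cublas.c -o check_cublas -I/usr/local/cuda-10.2/include -L/usr/local/cuda-10.2/lib64 -lcublas` (paths are the default install locations and may differ on your system).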
Environment
TensorRT Version: 8.0.1.6
GPU Type:
Nvidia Driver Version:
CUDA Version: 10.2 (with patches 1 and 2 applied)
CUDNN Version: 8.2
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered