[TensorRT] ERROR: 2: [ltWrapper.cpp::setupHeuristic::327]

Description

[TensorRT] WARNING: Detected invalid timing cache, setup a local cache instead
[TensorRT] ERROR: 2: [ltWrapper.cpp::setupHeuristic::327] Error Code 2: Internal Error (Assertion cublasStatus == CUBLAS_STATUS_SUCCESS failed.)

I encounter this error with TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-10.2.cudnn8.2.
The error does not happen with TensorRT-8.0.1.6.Linux.x86_64-gnu.cuda-11.3.cudnn8.2.

I suspect CUDA 10.2 is the cause, but after applying patch 1 and patch 2 the error still happens.
I want to deploy the model to an NVIDIA DRIVE AGX, so I have to use CUDA 10.2.

Any idea how to solve this problem?
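
For context, the failure appears to come from the engine-build step. A minimal Python sketch of that step, assuming an ONNX model (the file name and workspace size below are placeholders, not my actual setup):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # "model.onnx" is a placeholder for the model being converted.
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parse failed")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB, adjust as needed

    # The ltWrapper/cuBLAS error quoted above is reported while engine
    # building runs, i.e. around this call.
    engine = builder.build_engine(network, config)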

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Please refer to the installation steps at the link below in case you are missing anything:
https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html
We also suggest using the TensorRT NGC containers to avoid system-dependency issues:
https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt
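
If you want a quick sanity check that Python picks up the expected installation (a rough sketch; the 8.0.1.6 version string is simply the build discussed in this thread):

    import tensorrt as trt

    # Should print 8.0.1.6 if the expected TensorRT build is on the search path.
    print("TensorRT version:", trt.__version__)

    # Creating a builder exercises the basic CUDA runtime setup.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    print("Builder created:", builder is not None)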

Thanks!

Hi, I also encountered this problem. I reinstalled TensorRT as instructed and installed the patches, but it didn't help.
Environment: CUDA 10.2 + cuDNN 8.2.1 + TensorRT 8.0.1.6

[TensorRT] ERROR: 2: [ltWrapper.cpp::setupHeuristic::327] Error Code 2: Internal Error (Assertion cublasStatus == CUBLAS_STATUS_SUCCESS failed.)
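
Since the failing assertion is on cublasStatus, it may be worth checking whether cuBLAS initializes at all outside of TensorRT. A rough ctypes sketch (libcublas.so.10 is the soname CUDA 10.x normally ships; adjust the name if your install differs):

    import ctypes

    # CUDA 10.x normally installs cuBLAS as libcublas.so.10 (assumption).
    cublas = ctypes.CDLL("libcublas.so.10")

    handle = ctypes.c_void_p()
    status = cublas.cublasCreate_v2(ctypes.byref(handle))

    # 0 means CUBLAS_STATUS_SUCCESS; anything else points at a broken
    # cuBLAS/driver setup rather than something TensorRT-specific.
    print("cublasCreate_v2 status:", status)

    if status == 0:
        cublas.cublasDestroy_v2(handle)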