CUDA and TensorRT Compatibility

I’m having some issues with CUDA and TensorRT compatibility. During installation I followed this official support matrix from NVIDIA:

https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-861/support-matrix/index.html

My Environment

TensorRT Version: tensorrt 8.6.1.6-1+cuda12.0
GPU Type: A6000
Nvidia Driver Version: 525.125.06
CUDA Version: 12.0
CUDNN Version: 8.9.0
Operating System + Version: Ubuntu 22.04
Python Version (if applicable): 3.10.12
PyTorch Lightning Version (if applicable): 1.9.5
PyTorch Version (if applicable): 2.0.1
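
As a sanity check, the versions above can be compared against the support-matrix requirements programmatically. A minimal sketch — the required values below are assumptions transcribed from the TensorRT 8.6.1 / CUDA 12.0 documentation (e.g. the minimum Linux driver for CUDA 12.0), so verify them against the linked page before relying on this:

```python
# Sketch: compare the reported environment against assumed minimums
# for TensorRT 8.6.1 with CUDA 12.x. The "required" values are
# assumptions copied from NVIDIA's documentation -- double-check them.

def parse(version):
    """Turn a dotted version string into a tuple of ints, e.g. '12.0' -> (12, 0)."""
    return tuple(int(part) for part in version.split("."))

# Versions reported in this environment.
env = {
    "cuda": "12.0",
    "cudnn": "8.9.0",
    "driver": "525.125.06",
}

# Assumed minimums (verify against the support matrix / release notes).
required = {
    "cuda": "12.0",
    "cudnn": "8.9.0",
    "driver": "525.60.13",  # assumed minimum Linux driver for CUDA 12.0
}

for name, need in required.items():
    have = env[name]
    status = "OK" if parse(have) >= parse(need) else "MISMATCH"
    print(f"{name}: have {have}, need >= {need}: {status}")
```

On the numbers posted here, all three checks pass, which suggests the problem is not a plain version mismatch.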

But I’m getting the following error:

[TRT] [E] 1: [reformat.cpp::executeCutensor::329] Error Code 1: CuTensor (Internal cuTensor permutate execute failed)
[TRT] [E] 1: [checkMacros.cpp::catchCudaError::203] Error Code 1: Cuda Runtime (invalid resource handle)

Thanks in advance.

Hi,
Please refer to the installation steps in the link below in case you are missing anything.

Also, we suggest you use the TRT NGC containers to avoid any system-dependency-related issues.

Thanks!