TensorRT CUDNN CUDA compatibility issue

Description

Hi there, I am a little confused about which cuDNN and TensorRT versions are compatible. I need to use TensorRT-8.2.4.2 for a project (I had TensorRT-8.4.0.6 before, but I am bound to 8.2). I have CUDA 11.3 running on my system and have downloaded cuDNN 8.4.0, which is stated to be compatible with all CUDA 11.x releases. However, I am starting to doubt whether it works together with TensorRT-8.2.4.2, since I keep getting the following error when I try to convert a PyTorch model to TensorRT:

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

I have installed cuDNN from the tar archive using the following commands:

$ sudo cp cudnn--archive/include/cudnn*.h /usr/local/cuda/include
$ sudo cp -P cudnn--archive/lib/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
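As a sanity check that the headers you copied are really the version you expect (and that a stale copy is not left behind in /usr/local/cuda), here is a small sketch that parses the version macros out of cudnn_version.h. The header path is an assumption based on the copy commands above; note that cuDNN 8.x keeps its version macros in cudnn_version.h, not cudnn.h, so copying only cudnn.h can leave version checks looking at an old header:

```python
import os
import re

# Assumed install prefix, matching the cp commands above.
CUDNN_HEADER = "/usr/local/cuda/include/cudnn_version.h"

def parse_cudnn_version(header_text):
    """Extract (major, minor, patch) from the contents of cudnn_version.h."""
    parts = []
    for name in ("CUDNN_MAJOR", "CUDNN_MINOR", "CUDNN_PATCHLEVEL"):
        m = re.search(rf"#define\s+{name}\s+(\d+)", header_text)
        if m is None:
            return None  # macro missing: wrong or pre-8.x header
        parts.append(int(m.group(1)))
    return tuple(parts)

if __name__ == "__main__":
    if os.path.exists(CUDNN_HEADER):
        with open(CUDNN_HEADER) as f:
            print("Installed cuDNN:", parse_cudnn_version(f.read()))
    else:
        print("Header not found at", CUDNN_HEADER)
```

If this reports something other than the cuDNN version you think you installed, the CUDNN_STATUS_NOT_INITIALIZED error may be a mismatched or stale installation rather than a TensorRT incompatibility.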

Are my TensorRT and cuDNN versions incompatible? Should I instead install an older version such as cuDNN v8.2.1 (June 7th, 2021) for CUDA 11.x? I cannot find any compatibility matrix that relates cuDNN and TensorRT.

Environment

TensorRT Version: 8.2.4.2
GPU Type: NVIDIA RTX A2000
Nvidia Driver Version: 470.103.01
CUDA Version: 11.3
CUDNN Version: 8.4.0
Operating System + Version:
Python Version (if applicable): 3.7.13
PyTorch Version (if applicable): 1.11.0
Baremetal or Container (if container which image + tag):

Hi,

Please refer to the TensorRT support matrix documentation for clear information on compatibility.

Thank you.