cuDNN error when using Tesla T4

Platform:
CentOS Linux release 7.5.1804 (Core)
GPU type: Tesla T4
NVIDIA driver version: 418.67
CUDA version: 9.0.176
cuDNN version: 7.6.3.30
Python version: None
TensorFlow version: None
TensorRT version: TensorRT-6.0.1.5.CentOS-7.6.x86_64-gnu.cuda-9.0.cudnn7.6

1. When I convert a Caffe model to a TensorRT engine on the Tesla T4 using trtexec, a cuDNN error occurs:
"ERROR: …/rtSafe/cuda/cudaConvolutionRunner.cpp (303) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)"

  2. I tested the same version of TensorRT on a Tesla T4, and that worked fine.
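For reference, a Caffe-to-TensorRT conversion with trtexec (TensorRT 6) typically looks like the command below. The file names and output blob name are placeholders, not the exact command from this report; substitute your own prototxt, caffemodel, and output layer:

```shell
# Build a TensorRT engine from a Caffe model using trtexec (TensorRT 6).
# deploy.prototxt, model.caffemodel, and the blob name "prob" are
# placeholders -- replace them with your own files and output layer name.
trtexec --deploy=deploy.prototxt \
        --model=model.caffemodel \
        --output=prob \
        --batch=1 \
        --saveEngine=model.engine
```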

Please help me, thanks.

Hi,

Could you please share your script and model file so we can better help?

Meanwhile, please try CUDA 10 and the CUDA 10 version of cuDNN.
Please refer to the cuDNN support matrix:
https://docs.nvidia.com/deeplearning/sdk/cudnn-support-matrix/index.html#cudnn-cuda-hardware-versions
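Before switching versions, it can help to confirm which CUDA toolkit, cuDNN, and driver are actually installed. A quick check on a typical CentOS setup (paths assume the default /usr/local/cuda install and system-wide cuDNN headers; adjust if yours differ):

```shell
# Print the installed CUDA toolkit version (CUDA 9/10 era layout)
cat /usr/local/cuda/version.txt

# Print the cuDNN version from the header
grep -A 2 "#define CUDNN_MAJOR" /usr/include/cudnn.h

# Show the driver version and visible GPUs
nvidia-smi
```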

Thanks