Tesla T4: Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)

First, the code runs successfully on a P4; the environment is:

CentOS 7
CUDA 9.0
cuDNN 7.5
TensorRT 5.1.5 for CUDA 9.0 / cuDNN 7.5

But when I move the code to a T4, the environment is:

CentOS 7
CUDA 10.0
cuDNN 7.5
TensorRT 5.1.5 for CUDA 10.0 / cuDNN 7.5

I can build the code, but it fails at runtime with the following error:

ERROR: cuda/cudaConvolutionLayer.cpp (238) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)
ERROR: cuda/cudaConvolutionLayer.cpp (238) - Cudnn Error in execute: 8 (CUDNN_STATUS_EXECUTION_FAILED)

What should I do to solve this problem?
Any help would be appreciated. Thanks!


Hello,

We recommend trying the NVIDIA GPU Cloud (NGC) TensorRT-optimized containers, which remove many of the host-side dependencies: NVIDIA NGC
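
It may also be worth confirming that the binary on the T4 machine actually loads the CUDA 10.0 runtime and the cuDNN 7.5 library it was built against; the T4 is a Turing GPU (compute capability 7.5) and requires CUDA 10.0 or newer. A minimal standalone check could look like the following sketch (assumes the CUDA and cuDNN development packages are installed; this is not taken from your build):

// Sketch: print the CUDA/cuDNN versions this process actually loads
// and the compute capability of device 0.
#include <cstdio>
#include <cuda_runtime.h>
#include <cudnn.h>

int main() {
    int driverVer = 0, runtimeVer = 0;
    cudaDriverGetVersion(&driverVer);    // highest CUDA version the installed driver supports
    cudaRuntimeGetVersion(&runtimeVer);  // CUDA runtime the binary links against
    std::printf("CUDA driver API: %d, runtime: %d\n", driverVer, runtimeVer);

    // CUDNN_VERSION is the header version the program was compiled with;
    // cudnnGetVersion() reports the library picked up at run time.
    std::printf("cuDNN compiled: %d, loaded: %zu\n",
                (int)CUDNN_VERSION, cudnnGetVersion());

    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, 0);
    // A T4 (Turing) should report compute capability 7.5.
    std::printf("GPU 0: %s, compute capability %d.%d\n",
                prop.name, prop.major, prop.minor);
    return 0;
}

If the reported runtime or loaded cuDNN does not match the TensorRT 5.1.5 CUDA 10.0 build, fixing the library path (or using the NGC container above) usually resolves this class of execution failure. Also note that serialized TensorRT engines are not portable across GPU architectures, so any plan file built on the P4 must be regenerated on the T4.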

I had the same problem. Any help?