GPU type: Tesla T4
nvidia driver version: NVIDIA-SMI 415.27
CUDA version: 10.0
CUDNN version: 7.1.1
TensorRT version: 188.8.131.52
I used to test my program on a 1060 Ti, and now I build and run it on a Tesla T4. On the T4, however, the program core dumps with:

ERROR: cuda/cudaConvolutionLayer.cpp (163) - Cudnn Error in execute: 8

The architectures of these two GPUs are different; could that be causing the problem? I have already added CUDA_ARCH = -gencode arch=compute_75,code=sm_75 to Caffe's Makefile.config for the Tesla T4, but it did not help.
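For reference, a Makefile.config that targets both GPUs might set CUDA_ARCH like this. This is just a sketch: sm_61 is the compute capability of the GTX 1060 family and sm_75 is that of the Tesla T4 (CUDA 10.0 supports both), and Caffe must be rebuilt from clean (make clean && make) after changing this line for it to take effect.

```makefile
# Sketch of a CUDA_ARCH setting covering both test GPUs.
# sm_61 = Pascal (GTX 1060), sm_75 = Turing (Tesla T4).
CUDA_ARCH := -gencode arch=compute_61,code=sm_61 \
             -gencode arch=compute_75,code=sm_75
```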
What is the meaning of "Cudnn Error in execute: 8"?