Are CUDA 9 and CUDA 8 incompatible with the T4?

I compiled a program that depends on CUDA 9. The program runs normally on a Tesla P40, but it gets a segmentation fault on a Tesla T4. After I switched from CUDA 9 to CUDA 10, it runs normally on both the P40 and the T4.

Normally, CUDA 10 is recommended for Turing-architecture GPUs. This is indicated in the CUDA 10.0 release notes:

Release Notes :: CUDA Toolkit Documentation

“CUDA 10.0 adds support for the Turing architecture (compute_75 and sm_75).”

It should be possible to use CUDA 8 or 9 on T4 with proper compilation settings. It’s impossible to say what is happening with this description.
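By way of illustration, the "proper compilation settings" would mean embedding PTX that the driver can JIT-compile for the T4, since CUDA 9 has no sm_75 target. A hedged sketch, assuming a CUDA 9 toolchain and an illustrative source file `app.cu`:

```shell
# CUDA 9 cannot emit sm_75 SASS. Embed native SASS for the P40 (sm_61)
# plus PTX for compute_70; the driver JIT-compiles that PTX for the
# T4 (sm_75) the first time the program runs.
nvcc -gencode arch=compute_61,code=sm_61 \
     -gencode arch=compute_70,code=compute_70 \
     -o app app.cu
```

A binary built without any forward-compatible PTX (e.g. only `code=sm_61`) would have no code the T4 can run, which can surface as kernel-launch failures or crashes like the one described.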

Is cuDNN available for CUDA 9.1? I run a program with CUDA 9.1, torch==0.4.1, and the corresponding cuDNN 7.1.3. As soon as I set cudnn.benchmark = True, it reports an error: “RuntimeError: cuda runtime error (11) : invalid argument at /pytorch/aten/src/THC/THCGeneral.cpp:663”. Using cuDNN with CUDA 9.1 and torch==0.4.1 is necessary for me. Could you give a solution? My graphics card is a Tesla T4.

It sounds like an issue with torch or your Python code.

It’s possible to use CUDA 8 or 9 with a T4. I wouldn’t personally recommend it, especially because of the possibility of JIT compilation of libraries, but it’s possible. Compute capability 7.5 devices receive explicit support in CUDA 10.x and beyond. I don’t know exactly what problem you are having.
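One way to check whether a given binary (or library) can actually run on the T4 is to inspect which SASS and PTX targets it embeds, using `cuobjdump` from the CUDA toolkit. A sketch, with `app` standing in for your executable:

```shell
# List the GPU code embedded in the binary. If there is no sm_75 SASS
# and no PTX the driver can JIT-compile forward (compute_60/70 PTX
# works on sm_75), kernels cannot run on a T4.
cuobjdump --list-elf app   # embedded SASS (cubin) targets
cuobjdump --list-ptx app   # embedded PTX targets
```

This is also how you can see whether a prebuilt library (such as the one torch ships) carries any PTX at all; if it carries only SASS for older architectures, no compilation flags on your side will make it work on Turing.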