Tesla K80 and discontinued driver support for Kepler

We have a compute cluster with 16 Tesla K80 GPUs, and I understand driver support for the Kepler architecture is being discontinued.

However, even after some research, I am still unsure whether this applies only to GeForce products or also to Tesla (data center) products.

In other words, can we expect future CUDA versions beyond 11.4 to continue supporting the Tesla K80?

Up-to-date CUDA support is essential for us, as our deep learning software stack (TensorFlow etc.) depends on it.

Thanks in advance for any clarification.

This is the definitive Tesla document, which states that R470 will be supported out to 2024 and that "This driver branch supports CUDA 11.x (through CUDA enhanced compatibility)."

Also, the “Software Support Matrix” in that document lists R470 as the last supported driver branch.

I’m not sure you’ll get an official statement from NVIDIA here as to whether CUDA 12.0 will continue support, but looking at the 11.5 Release Notes:

"1.5. Deprecated Features
The following features are deprecated in the current release of the CUDA software. The features still work in the current release, but their documentation may have been removed, and they will become officially unsupported in a future release. We recommend that developers employ alternative solutions to these features in their software.

General CUDA

    NVIDIA Driver support for Kepler is removed beginning with R495. CUDA Toolkit development support for Kepler continues through CUDA 11.x."

and in the “CUDA Libraries” section:

"CUDA Math libraries are no longer shipped for SM30 and SM32.
Support for the following compute capabilities are deprecated for all libraries:

sm_35 (Kepler)
sm_37 (Kepler)
sm_50 (Maxwell)"
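To make the implications for the K80 concrete, here is a small sketch (plain Python; the function and data structure are my own invention, not an NVIDIA API) that just encodes the facts quoted above: the Tesla K80's GK210 GPUs are compute capability 3.7 (sm_37, Kepler), the last driver branch for Kepler is R470, and CUDA 11.x is the last toolkit generation with Kepler development support.

```python
# Hypothetical lookup encoding the support facts from the 11.5 release
# notes and the Tesla "Software Support Matrix" quoted above.
# Nothing here queries a real GPU or driver.

KEPLER_CAPS = {"sm_35", "sm_37"}  # sm_30/sm_32 are already gone from the math libraries


def last_supported(compute_cap: str) -> dict:
    """Return the last driver branch and CUDA toolkit for a compute capability."""
    if compute_cap in KEPLER_CAPS:
        return {"last_driver_branch": "R470", "last_cuda_toolkit": "11.x"}
    # For newer architectures the matrix currently says "Ongoing".
    return {"last_driver_branch": "Ongoing", "last_cuda_toolkit": "Ongoing"}


# Tesla K80 -> compute capability 3.7 -> sm_37
print(last_supported("sm_37"))
# → {'last_driver_branch': 'R470', 'last_cuda_toolkit': '11.x'}
```

So for a K80 cluster, the practical reading is: stay on an R470-series driver and a CUDA 11.x toolkit, and don't plan on CUDA 12.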

I’d speculate that there’s a good chance 11.x is the last stop for Kepler.

Later: I just looked at the “Software Support Matrix” again, and under the “Last CUDA Toolkit Support” column it states “11.x”, not “Ongoing”. So that’s probably it.


Thanks so much for the detailed reply.