RTX A6000 CUDA Support

I noticed that the RTX A6000 is only supported by CUDA 460 right now. Are there plans to roll out support for earlier CUDA versions soon? It would be very helpful to be able to use this GPU with earlier (and even current) versions of PyTorch and TensorFlow.


If I understand your question correctly, with respect to current TensorFlow you might find this thread helpful: Tensorflow1.14 is not working on RTX3090 inside the Docker container of Ubuntu18.04 and CUDA10.0 with Python2.

Not sure what you mean by “CUDA 460”. The latest CUDA version is 11.2. I assume you mean driver package version 460.x? Generally speaking, based on historical observation, NVIDIA does not backport support for newer hardware to older software branches. At least I cannot recall such a case. NVIDIA’s latest drivers support the latest hardware plus all older hardware typically going back some five to six years, so there is usually no reason not to install the latest production drivers, unless one runs with outdated hardware.
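
For reference, a quick way to confirm which driver branch is actually installed and which GPU it sees (a minimal sketch, assuming the NVIDIA driver is installed and nvidia-smi is on the PATH):

```python
# Sketch: query the installed NVIDIA driver version and GPU name via nvidia-smi.
# Assumes the driver is installed and `nvidia-smi` is on the PATH.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version,name", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
# Prints something like "460.xx.xx, NVIDIA RTX A6000" on a 460-branch driver.
print(result.stdout.strip())
```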

Insofar as other products depend on the CUDA version, the productive way forward is to file feature requests with the PyTorch and TensorFlow maintainers to support the latest CUDA version.
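
On the PyTorch side, here is a minimal sketch of how one might check which CUDA toolkit an installed build was compiled against and whether it includes the Ampere architecture (sm_86) that the RTX A6000 needs; it assumes a recent CUDA-enabled PyTorch wheel, and the function names are from recent PyTorch releases:

```python
# Sketch, assuming a CUDA-enabled PyTorch build is installed.
import torch

print(torch.version.cuda)                   # CUDA toolkit the wheel targets, e.g. "11.1"
print(torch.cuda.is_available())            # True if the driver and GPU are usable
print(torch.cuda.get_device_capability(0))  # (8, 6) for Ampere cards like the RTX A6000
print(torch.cuda.get_arch_list())           # architectures compiled into the build
```

If sm_86 is not in that list, the build was not compiled for Ampere, which is typically why older PyTorch and TensorFlow releases do not work well on this GPU regardless of the driver installed.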

Yeah, driver package version 460.x was what I meant. This answers my question pretty well; I wasn’t familiar with how NVIDIA rolls out its drivers with respect to its hardware.

Thanks for the help; it cleared things up.

You might also find this article of value: How To Install TensorFlow 1.15 for NVIDIA RTX30 GPUs (without docker or CUDA install)