RTX A6000 CUDA Compatibility

My laboratory recently bought a new computer with an RTX A6000 GPU. We’re looking to use this GPU to train the network described in the AiviaCommunity/3D-RCAN GitHub repository (three-dimensional residual channel attention networks). The software was written for TensorFlow 1.13.1, which, as I understand it, is only compatible with CUDA 10.0 (see "TensorFlow, CUDA and cuDNN Compatibility" on Punn's Deep Learning Blog). However, based on the CUDA Wikipedia page, it looks like our RTX A6000 is only compatible with CUDA 11.1–11.7.

Can someone confirm that our RTX A6000 is not compatible with CUDA 10.0? I’ve tried running the 3D-RCAN code with TensorFlow 1.13.1 and CUDA 10.0, but no GPUs are found. If the network is trained with CUDA 11.2 (and cuDNN v8.1.1.33 installed) and a newer version of TensorFlow (such as 2.9), the GPU is detected, but this introduces other issues which I haven’t been able to resolve.
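For reference, this is roughly how I check whether TensorFlow sees the GPU under either setup. It's just a sketch, not code from the 3D-RCAN repo, and the device-details call at the end needs TF 2.3 or newer:

```python
# Minimal GPU-visibility check (sketch; not part of the 3D-RCAN code).
import tensorflow as tf

if tf.__version__.startswith("1."):
    # TensorFlow 1.x path (e.g. 1.13.1 built against CUDA 10.0)
    from tensorflow.python.client import device_lib
    print("GPU available:", tf.test.is_gpu_available())
    print([d.name for d in device_lib.list_local_devices() if d.device_type == "GPU"])
else:
    # TensorFlow 2.x path (e.g. 2.9 with CUDA 11.2 / cuDNN 8.1)
    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs found:", gpus)
    for gpu in gpus:
        # Reports the compute capability, e.g. (8, 6) for an RTX A6000
        print(tf.config.experimental.get_device_details(gpu))
```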

NVIDIA doesn’t recommend using anything earlier than CUDA 11.1 with cc 8.6 GPUs, including the A6000.

It is possible to run code that was compiled for, e.g., CUDA 10.x, but everything has to be done properly in the build process (in particular, PTX for the kernels must be embedded in the binary so the driver can JIT-compile it for the newer architecture) to make this forward-compatibility path viable. Furthermore, when large libraries are involved, as they are with TF, you can experience very long, unexpected JIT delays when using this path.
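If you do go down that route, it's worth confirming which CUDA version your TensorFlow wheel was actually built against, and being aware of the CUDA JIT compilation cache. A rough sketch under those assumptions (tf.sysconfig.get_build_info() only exists in TF 2.3+, and the cache size shown is just an example, not a recommendation):

```python
# Sketch: inspect which CUDA/cuDNN versions this TensorFlow wheel was built against.
# Requires TF 2.3+; TF 1.13 does not expose this API.
import tensorflow as tf

build = tf.sysconfig.get_build_info()
print("Built against CUDA: ", build.get("cuda_version"))
print("Built against cuDNN:", build.get("cudnn_version"))

# When relying on the forward-compatibility (PTX JIT) path, the JIT-compiled
# kernels are cached on disk. Enlarging the cache can reduce repeated JIT delays;
# the 2 GB value below is only an illustrative example:
#   export CUDA_CACHE_MAXSIZE=2147483648
```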