I have set up a GPU instance on AWS. The GPU is an NVIDIA GRID K520. I have installed CUDA 8.0 and cuDNN 5.1. Now, trying to install tensorflow-gpu 1.1.0, I get an error: couldn’t find downloads that satisfy the tensorflow-gpu requirements. I was wondering, is the GRID K520 CUDA compatible? Kindly help.
All GPUs made by NVIDIA in the past ten years are capable of using CUDA. The GRID K520 is a compute capability 3.0 device, which is sufficient for cuDNN 5.1, from what I could find.
cuDNN 5.1 can be used with CUDA 8, best I can tell from a quick internet search (although it seems there may be different builds of cuDNN 5.1, one compatible with CUDA 7.5 and the other with CUDA 8.0; make sure you use the appropriate one).
You write that you get an error message that the TensorFlow requirements are not fulfilled, so you would want to figure out what TensorFlow’s GPU requirements are.
This webpage (https://www.tensorflow.org/install/install_linux) indicates that TensorFlow requires cuDNN 6, but it’s not immediately clear which TensorFlow version the page pertains to.
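To make the version-matching step concrete, here is a small sketch of the CUDA/cuDNN pairs for early tensorflow-gpu 1.x releases, transcribed from memory of TensorFlow’s published “tested build configurations” table — verify these against the official documentation before relying on them:

```python
# Sketch of the tested CUDA/cuDNN combos for early tensorflow-gpu 1.x
# releases. These entries are from memory of TensorFlow's tested-build
# table and should be double-checked against the official docs.
TF_GPU_REQUIREMENTS = {
    "1.0": {"cuda": "8.0", "cudnn": "5.1"},
    "1.1": {"cuda": "8.0", "cudnn": "5.1"},
    "1.2": {"cuda": "8.0", "cudnn": "5.1"},
    "1.3": {"cuda": "8.0", "cudnn": "6"},
    "1.4": {"cuda": "8.0", "cudnn": "6"},
}

def requirements_for(tf_version: str) -> dict:
    """Return the tested CUDA/cuDNN pair for a tensorflow-gpu 1.x release."""
    major_minor = ".".join(tf_version.split(".")[:2])
    return TF_GPU_REQUIREMENTS[major_minor]

print(requirements_for("1.1.0"))  # {'cuda': '8.0', 'cudnn': '5.1'}
```

If this table is right, tensorflow-gpu 1.1.0 matches your CUDA 8.0 / cuDNN 5.1 setup, and the cuDNN 6 mentioned on that page would pertain to a later 1.x release.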
Why is it not showing up on https://developer.nvidia.com/cuda-gpus ?
(1) Generally speaking, the lists of GPUs by CUDA put out by NVIDIA are updated manually. They are rarely complete, and sometimes not up-to-date.
(2) In this particular case, the current CUDA version (11.x) no longer supports sm_30 devices (that is, those with compute capability 3.0, like the GRID K520). So if this GPU was listed previously, it would have been removed by now.
I know that CUDA 9.x still supports sm_30 devices, since I am using that combo here. Best I can tell from perusing online docs, CUDA 10.x should also still support sm_30. In addition to support in CUDA, you would need a driver package that still supports such GPUs. The most recent such driver (for Windows) appears to be from fall 2019.
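The observations above can be summarized in a tiny lookup — a sketch covering only the toolkit lines discussed in this thread (8.x through 11.x), not a full reading of every release note:

```python
# Sketch: which CUDA toolkit lines can still target sm_30 (compute
# capability 3.0, e.g. GRID K520), per the observations in this thread:
# CUDA 8/9/10 support it; CUDA 11.x dropped it. Only valid for the
# toolkit lines discussed here; check the release notes of the specific
# toolkit version you plan to use.
def cuda_supports_sm30(cuda_major: int) -> bool:
    """True if the given CUDA toolkit major version can compile for sm_30."""
    return cuda_major < 11

print(cuda_supports_sm30(10))  # True  -> CUDA 10.x still targets sm_30
print(cuda_supports_sm30(11))  # False -> sm_30 dropped in CUDA 11.x
```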
My recommendation would be to look for newer hardware than Kepler-architecture GPUs.
Thank you … however, this architecture was chosen for me by a client I am working with.
Could you advise me on the feasibility of using CUDA and cuDNN for training DL models like YOLO? Would it be possible to do the same with some Docker containers? If so, which Docker container would you recommend?
My machine runs Ubuntu 20.04 LTS and the NVIDIA driver installed is:
Driver Version : 455.23.05
CUDA Version : 11.1
Attached GPUs : 4
Product Name : GRID K520
This is an AWS instance
This is outside my area of expertise, but per NVIDIA’s documentation it seems you should be able to use cuDNN 8.0.4 with CUDA 10.2 and r440 drivers on an sm_30 device like the GRID K520.
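As for a container, NVIDIA publishes CUDA base images on Docker Hub, and there should be a tag pairing CUDA 10.2 with cuDNN 8. A minimal sketch of a Dockerfile for building a framework such as Darknet/YOLO against this GPU — the exact base-image tag is an assumption, so browse the nvidia/cuda repository on Docker Hub to confirm it:

```dockerfile
# Sketch only -- the tag below is an assumption; check Docker Hub for a
# tag that pairs CUDA 10.2 with cuDNN 8 (e.g. 10.2-cudnn8-devel-ubuntu18.04).
FROM nvidia/cuda:10.2-cudnn8-devel-ubuntu18.04

# Build tools for compiling a DL framework (e.g. Darknet/YOLO) from source.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git && \
    rm -rf /var/lib/apt/lists/*

# When compiling CUDA code inside the container, target compute
# capability 3.0 explicitly, since default arch lists may not include it:
#   nvcc -gencode arch=compute_30,code=sm_30 ...
```

You would run such a container with the NVIDIA Container Toolkit (e.g. `docker run --gpus all …`); the host driver only needs to be at least as new as the container’s CUDA requires, and your 455.23.05 driver is newer than the r440 line.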
Relying on obsolete hardware and software is a hassle in my experience (you need to find and maintain properly matched old components across the software/hardware stack), so I reiterate that I would not recommend going down this path.