NVIDIA Quadro K2200 GPU Not Detected in TensorFlow or PyTorch Despite Following All Setup Steps

Hello NVIDIA community,

I am currently facing an issue with my NVIDIA Quadro K2200 GPU, which is not being detected by deep learning frameworks like TensorFlow and PyTorch on my PC. I’ve followed all the recommended steps for setting up my GPU, but it still doesn’t seem to work for my projects. Here’s a detailed overview of my setup and the steps I’ve taken:


System and Software Details:

  • GPU: NVIDIA Quadro K2200
  • Driver Version: 31.0.15.5222
  • CUDA Version: 10.2 (nvcc shows release 10.2, V10.2.89)
  • OS: Windows 10, 64-bit
  • Deep Learning Frameworks:
    • TensorFlow 2.4.0 (installed with pip)
    • PyTorch with CUDA Toolkit 10.2 (installed via Conda)

Steps I Have Already Tried:

  1. Installed Latest NVIDIA Drivers:
  • Verified that my driver version is compatible with CUDA 10.2.
  • Checked the GPU in Device Manager, and it is listed under Display Adapters.
  2. Installed CUDA Toolkit 10.2:
  • Confirmed installation using nvcc --version.
  3. Installed cuDNN:
  • Downloaded the appropriate cuDNN version for CUDA 10.2.
  4. Environment Setup:
  • Created a new environment in Miniconda specifically for deep learning projects.
  • Installed TensorFlow 2.4.0 and PyTorch (with cudatoolkit=10.2).
  5. Troubleshooting GPU Usage (a fuller diagnostic sketch follows this list):
  • Checked GPU availability in TensorFlow using:

```python
import tensorflow as tf
print("Num GPUs Available:", len(tf.config.list_physical_devices('GPU')))
```

Result: Num GPUs Available: 0

  • Checked GPU availability in PyTorch using:

```python
import torch
print("Is CUDA available:", torch.cuda.is_available())
```

Result: Is CUDA available: False
  6. Verified GPU Status with nvidia-smi:
  • Ran nvidia-smi, and it successfully displayed the GPU’s information and utilization.
  7. Closed Conflicting Processes:
  • Ensured no other processes were using the GPU (verified via Task Manager and nvidia-smi).
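
For completeness, here is a rough extra diagnostic sketch (referenced from step 5 above) that I can also run. It is only a sketch: it assumes tf.sysconfig.get_build_info() is available in my TensorFlow 2.4 install, and it simply prints what each framework was built against so I can compare that with the CUDA 10.2 toolkit installed on my machine.

```python
import tensorflow as tf
import torch

# TensorFlow: was this wheel built with CUDA support, and against which
# CUDA/cuDNN versions? (Assumes tf.sysconfig.get_build_info() exists in TF 2.4.)
print("TF built with CUDA:", tf.test.is_built_with_cuda())
print("TF build info:", dict(tf.sysconfig.get_build_info()))
print("TF visible GPUs:", tf.config.list_physical_devices('GPU'))

# PyTorch: which CUDA version is the installed binary compiled for?
# torch.version.cuda is None for a CPU-only build.
print("PyTorch version:", torch.__version__)
print("PyTorch built for CUDA:", torch.version.cuda)
print("CUDA available to PyTorch:", torch.cuda.is_available())
```

My thinking is that if the build versions reported here differ from the CUDA 10.2 toolkit I installed, that mismatch could be part of why neither framework sees the GPU.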

Current Issue:

Despite following all the steps mentioned above, my deep learning frameworks are still unable to detect the GPU. Both TensorFlow and PyTorch report that no GPU is available, even though the system recognizes the GPU (confirmed through Device Manager and nvidia-smi).


Questions for the Community:

  1. Are there any additional steps I should try to resolve this issue?
  2. Could this be a compatibility problem with the Quadro K2200 and modern frameworks like TensorFlow or PyTorch?
  3. Are there specific versions of TensorFlow or PyTorch that work better with CUDA 10.2 and the Quadro K2200?

Any help or suggestions would be greatly appreciated! Thank you in advance for your time and support.

K2200 is a Kepler cc3.0 GPU. Pretty old. Not supported by current versions of CUDA, TF, pytorch, or most other frameworks. So, yes, current “modern” frameworks have a compatibility problem with this old GPU.

You would have to go to very old versions of TF and PyTorch. The newest version of CUDA you could possibly use is 10.2. Even if you get something working on an “old” stack, what you’re going to discover is that modern code written for PyTorch or similar will not run; it requires newer versions of PyTorch (for example).

Here is a typical example of what is needed.

This doesn’t appear to be the case, despite the deceptive “K” in the name, although you may well have better access to info: the NVIDIA data sheet avoids mentioning the architecture or compute capability.

An HP data sheet and the ubiquitous TechPowerup say it’s Maxwell CC5.0.

Yes, it seems I made a mistake.

cc5.0 GPUs may still have support issues with current PyTorch/TF, but you should not be limited to CUDA 10.2.

If in doubt, you could run the deviceQuery sample code to confirm the compute capability.

In PyTorch you can do: print(torch.cuda.get_device_capability())
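
For example, a slightly fuller version of that check (just a sketch) guards on availability first, so it reports something useful either way:

```python
import torch

# Query the device only if CUDA actually initializes; otherwise report which
# CUDA version the installed binary was built for (None means a CPU-only build).
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    print("Compute capability:", torch.cuda.get_device_capability(0))
else:
    print("CUDA not available; binary built for CUDA:", torch.version.cuda)
```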

Note this:

From 0.3.1 onward, cuda capability 5.0 will not be included in the pre-packaged binary release (so all torch.cuda related stuff will not work).
You will be able to get pytorch to work with such architecture by compiling from source (so all operations will work).

So it seems that even for cc5.0, if you are trying to install PyTorch from binaries, you are going to be relegated to a pretty old version. It is probably best to build PyTorch with the desired compute capability, and the previous link I gave indicated how to do that.
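
Before rebuilding, one quick sanity check is to see which compute capabilities the installed binary was compiled for. This is just a sketch, and it assumes your PyTorch version provides torch.cuda.get_arch_list() (it returns an empty list when CUDA cannot be initialized at all):

```python
import torch

# SM architectures compiled into this PyTorch binary, e.g. ['sm_50', 'sm_60', ...].
arch_list = torch.cuda.get_arch_list()
print("Architectures in this build:", arch_list)
print("Includes sm_50 (Maxwell cc5.0)?", "sm_50" in arch_list)
```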

I’m not a PyTorch expert, so I probably won’t be able to help further.