CUDA and GPU compatibility - GeForce GTX 1660 Ti GPU

I just got a GeForce GTX 1660 Ti GPU. I installed CUDA and cuDNN. I installed tensorflow-gpu. However, TensorFlow is still running on the CPU. Please help!

-Laura

Hi Laura,

Can you share the output of nvidia-smi to double-check that your GPU and CUDA are set up properly? Also please share the following info about your environment:

  • CUDA version
  • cuDNN version
  • tensorflow and tensorflow-gpu versions (pip freeze | grep -i tensorflow lists both)
  • Operating system version
  • Python version
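To make gathering that info easier, here's a quick diagnostic sketch in Python. It assumes TensorFlow 2.1+ for tf.config.list_physical_devices (older releases expose it as tf.config.experimental.list_physical_devices), and the gather_env_info helper name is just mine:

```python
import platform


def gather_env_info():
    """Collect the environment details requested above into a dict."""
    info = {
        "os": platform.platform(),
        "python": platform.python_version(),
    }
    try:
        import tensorflow as tf  # fails if no tensorflow package is installed
        info["tensorflow"] = tf.__version__
        # An empty list here means TensorFlow cannot see any GPU at all.
        info["gpus"] = [d.name for d in tf.config.list_physical_devices("GPU")]
    except ImportError:
        info["tensorflow"] = None
        info["gpus"] = []
    return info


if __name__ == "__main__":
    for key, value in gather_env_info().items():
        print(f"{key}: {value}")
```

Paste that output into your reply along with nvidia-smi. If "gpus" comes back as an empty list while nvidia-smi shows the card, the problem is on the TensorFlow/CUDA side rather than the driver side.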

One common fix for these kinds of problems is to make sure your GPUs are exposed to the CUDA/TensorFlow libraries. On a Linux machine (bash), the environment variable below controls which GPUs are visible. I believe this is set correctly by default after a reboot, but here's how you'd do it manually:

# This is for a single-gpu machine. 0,1 for 2 gpus, 0,1,2,3 for 4 gpus, etc.
export CUDA_VISIBLE_DEVICES=0

Then try running your tensorflow code again to see if it uses your GPUs.
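If it's more convenient, the same variable can be set from inside your Python script instead of the shell; the one catch is that it has to happen before TensorFlow is imported anywhere in the process:

```python
import os

# Must run BEFORE `import tensorflow`, otherwise TensorFlow has
# already enumerated the devices and this has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # single-GPU machine; "0,1" for two
```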

It’s also possible that you have an unsupported combination of CUDA + TensorFlow, which silently falls back to CPU. For example, there might not be any TensorFlow pip packages built for CUDA 10.2 yet, so TensorFlow would default to CPU. You can avoid a lot of these config/version headaches by using nvidia-docker containers, such as these, which have everything set up for you inside the container: https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow
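For reference, launching one of those containers looks roughly like this (a sketch: the 19.12-tf1-py3 tag is just an example, so check the catalog for a current one, and older Docker setups use nvidia-docker run instead of the --gpus flag):

```shell
# Pull and run an NGC TensorFlow container with all GPUs exposed.
# Requires Docker 19.03+ and the NVIDIA Container Toolkit.
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:19.12-tf1-py3
```

Inside the container, CUDA, cuDNN, and a matching TensorFlow build are already installed, so your script should pick up the GPU without any host-side pip installs.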

Related resources: