My Python IDLE cannot detect any available GPU.
Python version: 3.10.11
Tensorflow: 2.12.0
CUDA version: cuda_12.1.r12.1/compiler.32688072_0
cudnn version: 8.9.2.26
GPU: GTX1660ti
Driver: 535.98
I installed CUDA, then copied the cuDNN folders into the CUDA folder (i.e., I pasted bin, lib, and include into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1).
Then I added the bin, lib, libnvvp, and include folders (under \CUDA\v12.1) to Path in the system environment variables.
In case I did that wrong, I also added bin, lib, and include from C:\Program Files\NVIDIA\CUDNN\v8.9 to the Path.
I also added C:\Program Files\NVIDIA GPU Computing Toolkit\zlib123dllx64\dll_x64, which contains zlibwapi.dll, to the Path.
Then I restarted the PC.
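To confirm the Path edits actually took effect in my Python session, I printed the relevant entries from inside Python (a simple check using only the standard library; the folder names above are the ones I expect to see):

```python
import os

# List PATH entries mentioning CUDA, cuDNN, or zlib,
# to confirm the environment-variable edits are visible to Python.
for entry in os.environ.get("PATH", "").split(os.pathsep):
    if any(s in entry.upper() for s in ("CUDA", "CUDNN", "ZLIB")):
        print(entry)
```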
Then I typed this in Python IDLE:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
And got this:
Num GPUs Available: 0
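In case it helps, I also checked whether my TensorFlow wheel was even built with CUDA support (these are standard TensorFlow calls; I have read that pip wheels for TensorFlow 2.11+ on Windows are CPU-only, but I am not sure that is my problem):

```python
import tensorflow as tf

# Whether this TensorFlow build was compiled against CUDA at all.
# If this prints False, no amount of CUDA/cuDNN PATH setup will help.
print("Built with CUDA:", tf.test.is_built_with_cuda())

# Full list of devices TensorFlow can see; CPU should always appear.
print("Devices:", tf.config.list_physical_devices())
```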
However, when I tried to calculate something on the GPU:
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)
print(c)
It succeeds:
tf.Tensor(
[[22. 28.]
 [49. 64.]], shape=(2, 2), dtype=float32)
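I suspect the matmul may have silently run on the CPU despite the tf.device('/gpu:0') context, since TensorFlow falls back to available devices under soft device placement. To see where each op actually ran, one can enable device placement logging (a standard TensorFlow API; the exact log text may vary by version):

```python
import tensorflow as tf

# Log which physical device each op is actually placed on.
tf.debugging.set_log_device_placement(True)

with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3])
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2])
    c = tf.matmul(a, b)

# If no GPU is visible, the placement log shows the MatMul on
# .../device:CPU:0 even though /gpu:0 was requested.
print(c)
```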
I really don't know what is happening, and I really need Python to be able to use my GPU.