Hello,
I would like to use a GTX 770 for deep learning.
GPU: GeForce GTX 770
OS: Windows 10, 64-bit
Platform: TensorFlow-GPU / Python
I am sure TensorFlow can see the GPU, because device placement is logged when I run:
import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
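For what it's worth, the devices TensorFlow detects can also be listed directly; a minimal sketch (assuming TensorFlow 1.x, where the internal device_lib module is available):

# List every device TensorFlow can use (TF 1.x).
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    # device.name is e.g. "/device:GPU:0"; for GPUs, physical_device_desc
    # also includes the card name and its compute capability.
    print(device.name, device.physical_device_desc)

If TensorFlow really sees the card, it should appear there as a /device:GPU:0 entry.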
However, when I check the GPU information from cmd.exe, it looks like the card is not supported on CUDA.
>nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Nov__3_21:08:12_Central_Daylight_Time_2017
Cuda compilation tools, release 9.1, V9.1.85
> nvidia-smi
Mon Jul 01 20:57:29 2019
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 388.19                 Driver Version: 388.19                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 770     WDDM | 00000000:02:00.0 N/A |                  N/A |
| 17%   33C    P8    N/A /  N/A |    109MiB /  4096MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0                    Not Supported                                       |
+-----------------------------------------------------------------------------+
I have tried three versions of CUDA (9.1, 10.0, and 10.1); with every one of them the GTX 770 always shows "Not Supported".
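In case it helps with diagnosis, here is a minimal ctypes sketch that reads the card's compute capability straight from the CUDA driver API on Windows (nvcuda.dll ships with the display driver; the two attribute constants are taken from cuda.h):

import ctypes

# Query the first GPU's compute capability via the CUDA driver API.
cuda = ctypes.WinDLL("nvcuda.dll")  # installed with the NVIDIA display driver

# Attribute IDs from cuda.h
CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR = 75
CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR = 76

assert cuda.cuInit(0) == 0  # 0 == CUDA_SUCCESS

device = ctypes.c_int()
assert cuda.cuDeviceGet(ctypes.byref(device), 0) == 0  # device ordinal 0

major = ctypes.c_int()
minor = ctypes.c_int()
cuda.cuDeviceGetAttribute(ctypes.byref(major),
                          CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR, device)
cuda.cuDeviceGetAttribute(ctypes.byref(minor),
                          CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR, device)
print("Compute capability: %d.%d" % (major.value, minor.value))

A GTX 770 is a Kepler card, so this should print 3.0.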
What can I do? This problem has been blocking me for three days!!