Does the latest GTX 1660 model support CUDA?

I just bought a GTX 1660 GPU, but this model is currently not in the support list on the official website. Can you confirm whether this model supports CUDA? Thanks.

Request help

If you download the latest driver package and the latest CUDA version, CUDA should work just fine on the GTX 1660. Have you tried that? What happened?

In general, NVIDIA doesn’t release new hardware without having software support in place, as there would be no point in doing so.

Thank you for your reply. I haven't tried to run it yet. I just bought the 1660 today, and when I checked the official website I couldn't find it listed, so I posted this question. I'm going to try it tomorrow.

All GPUs NVIDIA has produced over the last decade support CUDA, but current CUDA versions require GPUs with compute capability >= 3.0. The Turing-family GeForce GTX 1660 has compute capability 7.x.
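If you want to check the compute capability programmatically rather than look it up, a minimal sketch (assuming a CUDA-enabled PyTorch build is installed; PyTorch is just one convenient way to query it) is:

import torch  # assumes a CUDA-enabled PyTorch build

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(torch.cuda.get_device_name(0), "compute capability", f"{major}.{minor}")
else:
    print("No CUDA-capable device visible to this PyTorch build")

Anything 3.0 or higher satisfies the requirement above; for a GTX 1660 it should report 7.5.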

The parts of NVIDIA’s website that explicitly list supported models are often not updated in a timely fashion.


On Ubuntu 16.04, I verified that CUDA works on the GTX 1660. But cuDNN does not work! PyTorch and TensorFlow throw cuDNN initialization errors.
Is this a known issue? If so, when is it expected to be fixed?

You'll need to use a cuDNN version that was released after the GTX 1660 came out.

https://github.com/tensorflow/tensorflow/issues/27144

note from above:

“CUDNN verified to be working correctly with simple CUDNN programs”

Hi, we faced the same issue with the 1660 and cuDNN 7.6.0. As I understand it, cuDNN 7.6.0 was released after the 1660. Why isn't it working?

The error is on initialization:

Error : Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

On the same PC, at the same time, an RTX 2080 works OK.


That error is coming from TensorFlow.

I suggest you ask questions about cuDNN on the cuDNN forum.

https://devtalk.nvidia.com/default/board/305/cudnn/

Also, you should search for TensorFlow issue reports. Many users report being able to fix this by setting allow_growth = True:

import tensorflow as tf  # TensorFlow 1.x API

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow GPU memory on demand instead of grabbing it all up front
sess = tf.Session(config=config)
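For reference, if you end up on a TensorFlow 2.x release where ConfigProto and Session are no longer available, a rough equivalent (not something verified in this thread) is per-GPU memory growth:

import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand rather than all up front
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)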

Do some searching. This is not a TensorFlow support forum. I won't be able to respond to further questions about TensorFlow problems on this forum.

If you want to verify that cuDNN is working correctly with your GTX 1660, then run the cuDNN sample codes provided by NVIDIA. If they work correctly, then cuDNN is working correctly on your GTX 1660, and you will need to investigate problems reported by TensorFlow as TensorFlow issues.
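Short of building the NVIDIA samples, a quick sanity check that exercises the cuDNN path is to run a small convolution on the GPU. This is only a rough substitute for the official samples, and it assumes a CUDA-enabled PyTorch install is at hand:

import torch
import torch.nn as nn

print("cuDNN available:", torch.backends.cudnn.is_available())
print("cuDNN version:", torch.backends.cudnn.version())

# A tiny convolution; with cuDNN enabled (the default) this goes through cuDNN,
# and it will raise an error if CUDA or cuDNN is not working on this GPU
x = torch.randn(1, 3, 32, 32, device="cuda")
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1).cuda()
print("conv output shape:", tuple(conv(x).shape))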

Thanks, sorry.
The samples passed, so it does indeed seem to be coming from TensorFlow, and that is not obvious from the error. Thanks.

Just for those who would be interested:
I just bought an RTX 2060 SUPER, and after struggling a lot, the allow_growth option did solve the problem of "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D}} = Conv2D[T=DT_FLOAT, data_format=“NCHW”, dilations=[1, 1, 1, 1], padding=“SAME”, strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device=”/job:localhost/replica:0/task:0/device:GPU:0"]"

This is the working combination:
NVIDIA driver 430.34
CUDA 10.0 (not 10.1)
cuDNN v7.6.0 (May 20, 2019), for CUDA 10.0
Ubuntu 18.04
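To confirm TensorFlow actually picked up a combination like that, a quick check using the TF 1.x test helpers (adjust for your own install) is:

import tensorflow as tf

print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available:", tf.test.is_gpu_available(cuda_only=True))

Both should report True if the driver, CUDA, and cuDNN versions match what your TensorFlow build expects.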

Cheers,

Do you know the specific compute capability of the GTX 1660? Is it >= 7.5?

I haven't been able to find this data by googling.


Good resources (outside of NVIDIA’s website) for this kind of information are the TechPowerUp database and Wikipedia.

The information at both sites agrees that the GTX 1660 has compute capability 7.5 (Turing architecture). As both the TechPowerUp database and Wikipedia are maintained by volunteers, errors and omissions are possible; however, in my experience the information at the two linked sites is reliable, especially when in agreement.

Hey, which CUDA version is suitable? Did you get it working?

After messing around with this for about a week, it somehow started working with CUDA version 11.8. I'll chalk it up to starting a venv with Python 3.9 and installing PyTorch right after; it took about 45 minutes to get a package together, but it finally did. I also downloaded the latest cuDNN and added its files (copy and paste) to the respective folders in the CUDA Toolkit folder, then added these two folders to the PATH: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\extras\CUPTI\include and C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\extras\CUPTI\lib64.

I guess I should say that I also have the 12.2 version of the CUDA Toolkit installed. I don't think that it has any impact, or maybe it's what finally made it work; nothing really started happening until I added the two folders to the PATH, so I don't know if it was just coincidence or the reason it all worked. I should also say that I am running a 1660 SUPER and just did a clean install of the latest drivers as of 30 Sep 23.

I don't think anyone will be seeing this in the future, but if so, I hope it helps. And if it's me looking for how to do it again because I forgot how, just know it's possible and you did it once :-)
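For anyone who wants a quick way to confirm a setup like this actually works, here is the kind of check I mean, a rough sketch assuming the CUDA-enabled PyTorch wheel is what ended up in the venv:

import torch

print("PyTorch:", torch.__version__)
print("CUDA version PyTorch was built with:", torch.version.cuda)
print("cuDNN version:", torch.backends.cudnn.version())
print("GPU:", torch.cuda.get_device_name(0))
# run one real op on the GPU to make sure kernels actually execute
print("GPU sum check:", (torch.ones(1000, device="cuda") * 2).sum().item())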

This is some crap. I just found this video, and this guy walks you through setting it up; it's just about everything I just did.