Does the GTX 1660 support CUDA?

I just bought a GTX 1660 GPU, and this model is currently not in the support list on the official website. Can you confirm whether this model supports CUDA? Thanks.

Request help

If you download the latest driver package and the latest CUDA version, CUDA should work just fine on the GTX 1660. Have you tried that? What happened?

In general, NVIDIA doesn’t release new hardware without having software support in place, as there would be no point in doing so.

Thank you for your reply. I haven’t tried running it yet. I just bought the 1660 today, went to the official website to check, and couldn’t find it listed, so I posted this question. I’m going to try it tomorrow.

All GPUs NVIDIA has produced over the last decade support CUDA, but current CUDA versions require GPUs with compute capability >= 3.0. The Turing-family GeForce GTX 1660 has compute capability 7.5.
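To make the version gate above concrete, here is a minimal sketch in Python. The function name and the example capabilities are my own for illustration, not an official NVIDIA API; the only rule encoded is the one stated above (current CUDA requires compute capability >= 3.0):

```python
# Minimal sketch of the compute-capability gate described above.
# cuda_supports() is an illustrative helper, not an NVIDIA API.

MIN_SUPPORTED = (3, 0)  # current CUDA versions require >= 3.0

def cuda_supports(compute_capability):
    """Return True if a GPU with the given (major, minor)
    compute capability is supported by current CUDA versions."""
    return compute_capability >= MIN_SUPPORTED

# The Turing-family GTX 1660 has compute capability 7.5:
print(cuda_supports((7, 5)))   # True
# A Fermi-era GPU (compute capability 2.1) is no longer supported:
print(cuda_supports((2, 1)))   # False
```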

The parts of NVIDIA’s website that explicitly list supported models are often not updated in a timely fashion.


On Ubuntu 16.04, I verified that CUDA works on the GTX 1660. But cuDNN does not work! PyTorch / Tensorflow throw cuDNN initialization errors.
Is this a known issue? If so, when is it expected to be fixed?

You’ll need to use a cuDNN version that was released after the GTX 1660 was released.

https://github.com/tensorflow/tensorflow/issues/27144

note from above:

“CUDNN verified to be working correctly with simple CUDNN programs”

Hi, we hit the same issue with the 1660 and cuDNN 7.6.0. As I understand it, cuDNN 7.6.0 was released after the 1660. Why isn’t it working?

The error is on initialization:

Error : Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

On the same PC, at the same time, an RTX 2080 works OK.


That error is coming from Tensorflow.

I suggest you ask questions about cuDNN on the cuDNN forum.

https://devtalk.nvidia.com/default/board/305/cudnn/

Also, you should search Tensorflow issue reports. Many users report being able to fix this with allow_growth=True:

import tensorflow as tf  # TF 1.x API

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

Do some searching. This is not a Tensorflow support forum. I won’t be able to respond to further questions about tensorflow problems on this forum.

If you want to verify that CUDNN is working correctly with your gtx 1660, then run the CUDNN sample codes provided by NVIDIA. If they work correctly, then CUDNN is working correctly on your GTX 1660, and you will need to investigate problems reported by Tensorflow as Tensorflow issues.

Thanks, sorry.
The samples passed, so it does indeed seem to come from Tensorflow; that wasn’t obvious from the error. Thanks.

just for those who would be interested.
I just bought an RTX 2060 SUPER, and after struggling a lot, the allow_growth option did solve this error:

“Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"]]”

This is the working combination:
nvidia driver 430.34
CUDA 10.0 (not 10.1)
cuDNN v7.6.0 (May 20, 2019), for CUDA 10.0
UBUNTU 18.04
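Since CUDA 10.0 vs 10.1 mattered here, it can help to double-check which toolkit is actually on your path. Here is a small sketch that parses the version out of `nvcc --version` output; the helper name and the sample string are illustrative (run the real command on your machine):

```python
import re

def cuda_version_from_nvcc(output):
    """Extract the CUDA release number (e.g. '10.0') from
    the text printed by `nvcc --version`."""
    m = re.search(r"release (\d+\.\d+)", output)
    return m.group(1) if m else None

# Illustrative sample of what a CUDA 10.0 install prints:
sample = (
    "nvcc: NVIDIA (R) Cuda compiler driver\n"
    "Cuda compilation tools, release 10.0, V10.0.130\n"
)
print(cuda_version_from_nvcc(sample))  # 10.0

# On a real machine, feed it the actual output instead, e.g.:
#   nvcc --version | python3 this_script.py
```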

Cheers,