CUDA-Enabled GeForce 1650?

I have a GTX 1650 graphics card (the “1650 with Max-Q design”) and have installed all the drivers and necessary components, but CUDA is not working. What do I need to do?

Same experience: I have the GTX 1650 with CUDA installed, but it is not working. I was hoping to get some pointers on whether this is possible, but there seem to be no conclusive answers.


Same for me.

torch.cuda.is_available()
False

GeForce GTX 1650 (it lists 1024 CUDA cores)
NVCUDA64.dll v31.0.15.3114
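
For anyone comparing notes, a slightly fuller check narrows down whether the installed wheel is CPU-only, which is the most common reason is_available() returns False even with a working driver. A minimal sketch, assuming a standard PyTorch install:

import torch

# CUDA runtime version the installed PyTorch wheel was built against;
# None means a CPU-only wheel, regardless of the driver version.
print(torch.version.cuda)
print(torch.cuda.is_available())
print(torch.cuda.device_count())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

If torch.version.cuda prints None, reinstalling a CUDA-enabled build is the first thing to try; the driver behind NVCUDA64.dll only matters once the wheel itself supports CUDA.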


Any solution to this yet?

Has anyone figured this out?

Hey everyone,
I am a new data scientist. I was trying to train a YOLOv8m model on my system, which has a GTX 1650. CUDA 12.1 was installed along with PyTorch, and it shows up when I do the version check, but during training the GPU does not seem to help: the loss values are ‘nan’ and the mAP values are 0. I even tried installing the CUDA toolkit 12.3 from NVIDIA, but still no luck.
Can someone please guide me on what the problem is and how I can solve it?
Thank you.
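
Not an official answer, but one thing worth ruling out on GTX 16-series cards: ‘nan’ losses during YOLOv8 training are frequently caused by automatic mixed precision (FP16), which is known to misbehave on some Turing GTX cards, rather than by the CUDA install itself. A quick test is to disable AMP in the Ultralytics trainer; a minimal sketch, where the dataset name and epoch count are placeholders:

from ultralytics import YOLO

model = YOLO("yolov8m.pt")
# amp=False forces full-precision (FP32) training; if the nan losses
# disappear, the culprit was FP16 on this card, not the CUDA setup.
model.train(data="coco128.yaml", epochs=10, device=0, amp=False)

If the losses stay ‘nan’ even with AMP off, the problem is more likely elsewhere (labels, learning rate) than in the GPU setup.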

I can confirm the GPU works with CUDA on the GeForce GTX 1650 (tested with TensorFlow, following the slightly outdated “Native Windows” TensorFlow installation instructions; I did not succeed with WSL on a newer TensorFlow version).
NVIDIA staff should update their list.
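
For anyone who wants to reproduce that check, something along these lines is enough to confirm TensorFlow actually sees the card (a minimal sketch; it assumes a GPU-enabled TensorFlow build, which on native Windows means version 2.10 or earlier):

import tensorflow as tf

# Lists the GPUs TensorFlow can use; an empty list means the CUDA/cuDNN
# setup or the TensorFlow build itself is not GPU-enabled.
print(tf.config.list_physical_devices("GPU"))
print(tf.test.is_built_with_cuda())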

I just installed the desktop version of the GTX 1650. My goal is to use it for the FastAI course, PyTorch, and general CUDA + Python coding while I learn. Can anyone who got it to work tell me if there is a specific version I need to download?
I am using this version from the NVIDIA site: cuda_12.4.0_551.61_windows.exe
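
One thing that may save you some trouble, though please double-check it against the install selector on pytorch.org: the PyTorch pip wheels bundle their own CUDA runtime, so for PyTorch work you mainly need a recent driver plus a wheel built for a CUDA version your driver supports; the standalone toolkit installer is only needed if you plan to compile CUDA code yourself. After installing, a quick smoke test (a sketch, assuming a CUDA-enabled wheel):

import torch

if torch.cuda.is_available():
    # Move a small tensor to the GPU and run one operation; if this
    # prints cuda:0 without errors, the setup is usable for training.
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).device)
else:
    print("CUDA not available - check the driver and the installed wheel")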

I have seen here that the GTX 1650 has CUDA compute capability 7.5 (that figure is the card’s compute capability, not a CUDA toolkit version, so it does not mean the card is limited to CUDA 7.5).
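
If anyone wants to confirm that on their own machine, and check that their PyTorch build was compiled for this architecture, a short sketch (assumes a CUDA-enabled PyTorch wheel is installed):

import torch

# A Turing GTX 1650 should report compute capability (7, 5), i.e. sm_75.
print(torch.cuda.get_device_capability(0))
# The wheel must include sm_75 among its compiled architectures
# for kernels to run on this card.
print(torch.cuda.get_arch_list())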