Is the Nvidia GeForce GTX 1650 CUDA-enabled for PyTorch and for running ML models in a conda environment? If it is, could someone help me with the installation?

I have downloaded the CUDA toolkit, but when I open Jupyter Notebook and run torch.cuda.is_available() I get False as output, even though I have a GTX 1650 (not the Ti).
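A quick way to narrow this down (a minimal check using standard PyTorch attributes, nothing specific to the 1650) is to print which build of PyTorch is actually installed. If torch.version.cuda is None, the wheel is a CPU-only build, and no driver or toolkit change will make is_available() return True:

import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only wheel
print(torch.version.cuda)         # None for a CPU-only build, e.g. "11.8" for a CUDA build
print(torch.cuda.is_available())  # needs both a CUDA build of torch and a working driver
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))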

I also have this problem. The CUDA toolkit v12.1 installer recognizes the hardware, but CUDA doesn’t seem to work on this GPU, and it doesn’t appear in the list of CUDA-compatible GPUs. I thought all recent Nvidia GPUs were CUDA-compatible, but apparently that is not the case.

The latest version of PyTorch only appears to support CUDA 11.8. I uninstalled PyTorch via pip3, and uninstalled all components of Nvidia CUDA Toolkit v12.1 from Control Panel Add/Remove Programs (requiring two restarts to get everything out). Then I installed Nvidia CUDA Toolkit v11.8 and subsequently PyTorch again. After doing this, I checked that

torch.cuda.get_device_name(0)

returned the name of my GPU.
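For anyone repeating the same route, the reinstall step for the CUDA 11.8 build of PyTorch is the pip command that pytorch.org lists for that toolkit version (index URL as documented at the time; it may change with newer releases):

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118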

I saw an old post from a moderator saying that every GPU released is CUDA-enabled, but in the list the only one I can’t find is the GeForce GTX 1650.

Is your GPU a GeForce GTX 1650 too?

I am currently doing this in a conda environment (Miniconda), so it would be nice to know which environment you are using.

Yes, 1650, not Ti or anything.
I don’t have conda. Using Python 3.11.3 on Windows 10. I installed Visual Studio, then added numpy, scipy, matplotlib, and pandas via pip; no other packages. Then Nvidia CUDA Toolkit 11.8, and only then PyTorch via pip3. I had to manually edit environment variables to get VS to recognize my Python installation, adding my Python directory to PATH. I tested it on a small ML task, and the GPU shows activity in Task Manager.
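A minimal GPU smoke test along these lines (the matrix sizes and loop count are arbitrary, just enough work to be visible) will show up in Task Manager’s CUDA/3D graphs if the CUDA build is working:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Running on:", device)

# Repeated large matrix multiplications: enough work to register as GPU load
# in Task Manager while the loop runs.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
for _ in range(200):
    c = a @ b
if device.type == "cuda":
    torch.cuda.synchronize()  # wait for queued GPU work to finish before printing
print("Done:", c.shape)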

So first CUDA, then PyTorch. Let me try that and see. Thank you for the help.

I am trying to train yolov8m. I have done everything you suggested, but it is still not working.
During training the loss values are nan and the mAP values are 0.
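For context, a minimal launch along these lines (the dataset yaml and epoch count here are placeholders, not your actual settings) pins the run to GPU 0 explicitly, which at least rules out a silent fallback to CPU:

from ultralytics import YOLO

# Start from the pretrained yolov8m weights; coco128.yaml is just the small
# example dataset that ships with ultralytics, used here as a placeholder.
model = YOLO("yolov8m.pt")
model.train(data="coco128.yaml", epochs=10, imgsz=640, device=0)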