Trouble with RTX 2080 Super and CUDA / Torch

I am trying to get my GPU (RTX 2080 Super) working with PyTorch and CUDA for AI/ML work, but I am running into difficulties.

I am running Python 3.10.11, and have tried rolling PyTorch back to a version that supports CUDA 10.2 (which, as far as I have been able to find, is the most recent version of CUDA that supports my GPU). I did not have success installing the two as a bundle, but I was able to install them independently of each other. They still do not work together, though.

I keep getting an error stating that Torch was not compiled with CUDA enabled.
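
For reference, a minimal sketch of the kind of call that fails (not my exact code, just the same failure mode):

import torch

x = torch.zeros(1)
x = x.cuda()  # raises "AssertionError: Torch not compiled with CUDA enabled"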

torch = 1.12.1
torchvision = 0.13.1
torchaudio = 0.12.1
cuda = 10.2 (12.1 didn't work either)
python = 3.10.11
OS = Windows 11 22H2

Thank you in advance for any assistance you can provide.

I don’t think you’ll find many PyTorch experts on this forum. A recommended place to ask PyTorch questions is here.

Your GPU is supported by all recent versions of CUDA, including 10.x, 11.x, and 12.x, so you are not limited to CUDA 10.2.

PyTorch is distributed in several builds: some are CUDA-enabled, some are CPU-only. If you get that message, the build you installed is a CPU-only one, i.e. you have selected the wrong version of PyTorch.
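
A quick way to check which build you actually have (a generic diagnostic sketch, nothing specific to your setup):

import torch

print(torch.__version__)          # e.g. "1.12.1+cpu" indicates a CPU-only build
print(torch.version.cuda)         # None on a CPU-only build, a version string on a CUDA build
print(torch.cuda.is_available())  # True only if the build has CUDA support and the driver is working

If torch.version.cuda prints None, the wheel you installed simply has no CUDA support, regardless of what CUDA toolkit is on the machine.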

I recommend following the procedure here.
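
As an illustration only (the exact command depends on the options you pick in that selector), the pip command it generates for a CUDA-enabled build looks something like this:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

The important part is the CUDA-specific index URL; on Windows, installing plain torch from the default PyPI index gives you the CPU-only build.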

In all of this, one suggestion I have is to make sure you install the latest driver for your GPU. Newer drivers are backward compatible with older CUDA runtimes, so the latest driver gives you the most flexibility in terms of CUDA support.
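
A quick way to check what the installed driver supports: run nvidia-smi from a command prompt. The "CUDA Version" shown in its header is the highest CUDA version that driver can handle (it reports driver capability, not an installed CUDA toolkit).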