Fix: “Torch not compiled with CUDA enabled” in Automatic1111 on RTX 5090 (Windows)
This is a complete, reproducible fix for getting Automatic1111 Stable Diffusion WebUI to use the GPU on an RTX 5090.
It captures the exact errors I hit, why they happened, and the step‑by‑step commands that solved them.
TL;DR (Quick Fix)
- Activate your WebUI venv (mine is `E:\Automatic111\sd-venv312`):
E:\Automatic111\stable-diffusion-webui> call E:\Automatic111\sd-venv312\Scripts\activate
- Clean old Torch installs & cache:
pip uninstall -y torch torchvision torchaudio xformers
pip cache purge
- Install PyTorch nightly with CUDA 12.8 (sm_120 support for RTX 50‑series):
pip install --no-cache-dir --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
- Verify GPU is detected:
python -c "import torch,torchvision; print('torch',torch.__version__,'cuda',getattr(torch.version,'cuda',None)); print('avail',torch.cuda.is_available()); print('name', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'NO GPU'); print('cap', torch.cuda.get_device_capability(0) if torch.cuda.is_available() else None)"
You should see something like:
torch 2.9.0.dev20xx+cu128 cuda 12.8
avail True
name NVIDIA GeForce RTX 5090
cap (12, 0)
- Launch WebUI with a simple `webui-user.bat` (no extra Torch commands, no skip‑cuda‑test):
set COMMANDLINE_ARGS=--opt-sdp-attention
call webui.bat
If the UI shows steps running ~20–30 it/s and no “CUDA not enabled” errors, you’re good.
My Environment (when it failed & then worked)
- Windows
- GPU: NVIDIA GeForce RTX 5090
- Python: 3.10.11 (64‑bit)
- WebUI: v1.10.1 (`82a973c0...`)
- Venv: `E:\Automatic111\sd-venv312`
- Final working Torch/TV:
torch 2.9.0.dev...+cu128
torchvision 0.24.0.dev...+cu128
- CUDA runtime reported by Torch: 12.8
Note: cu124 (CUDA 12.4) wheels do not include `sm_120` for RTX 50‑series, so they either warn about unsupported capability or fall back to CPU. Nightly cu128 wheels do include `sm_120` support.
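The `sm_xxx` names map directly onto the `(major, minor)` tuple that `torch.cuda.get_device_capability()` returns; a tiny helper (illustrative, not part of any library) makes the correspondence explicit:

```python
def sm_name(capability: tuple[int, int]) -> str:
    """Format a CUDA compute capability tuple as its sm_* architecture name."""
    major, minor = capability
    return f"sm_{major}{minor}"

print(sm_name((12, 0)))  # sm_120 -> RTX 5090
print(sm_name((9, 0)))   # sm_90  -> newest arch supported by the cu124 wheels
```

So when the cu124 warning says it supports `sm_50 ... sm_90`, a capability of `(12, 0)` is simply outside that range.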
The Errors I Saw
1) CPU build or CUDA disabled
From WebUI and terminal:
AssertionError: Torch not compiled with CUDA enabled
Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS ...
and
torch 2.8.0+cpu cuda None is_available False
2) Older CUDA (12.4) wheels on a 5090
UserWarning:
NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 ... sm_90.
This means those wheels don’t include sm_120, so the 5090 won’t be used.
Root Cause (Why it Broke)
- I had CPU‑only or older CUDA (cu124) Torch/TV wheels installed.
- RTX 5090 requires `sm_120` support, which currently ships in nightly CUDA 12.8 wheels (`cu128`).
- WebUI’s auto‑install / custom index settings can sometimes pull the wrong wheels (CPU or older CUDA).
The Full Fix (Step by Step)
Paths below are mine; adjust for your setup.
0) Open a fresh terminal and activate the correct venv
call E:\Automatic111\sd-venv312\Scripts\activate
Confirm you’re in the venv:
python -c "import sys; print(sys.executable)"
Expected:
E:\Automatic111\sd-venv312\Scripts\python.exe
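If you'd rather not eyeball the path, Python's `sys` module can tell a venv interpreter from the base install (standard-library behavior on Python 3.3+; the helper name is mine):

```python
import sys

def in_venv() -> bool:
    """True when the running interpreter belongs to a virtual environment."""
    return sys.prefix != sys.base_prefix

print(sys.executable)        # should point into sd-venv312\Scripts
print("venv active:", in_venv())
```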
1) Remove bad installs and cache
pip uninstall -y torch torchvision torchaudio xformers
pip cache purge
2) Install nightly cu128 wheels (these include `sm_120` for 50‑series)
pip install --no-cache-dir --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
3) Sanity‑check GPU from Python
python -c "import torch,torchvision; print('torch',torch.__version__); print('torchvision',torchvision.__version__); print('cuda?',torch.cuda.is_available()); print('cuda runtime',getattr(torch.version,'cuda',None)); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'NO GPU'); print('cap', torch.cuda.get_device_capability(0) if torch.cuda.is_available() else None)"
I got:
torch 2.9.0.dev...+cu128
torchvision 0.24.0.dev...+cu128
cuda? True
cuda runtime 12.8
NVIDIA GeForce RTX 5090
cap (12, 0)
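The one‑liner above assumes `torch` imports cleanly; a slightly more defensive version (my own sketch, not part of WebUI or PyTorch) reports a missing or CPU‑only install instead of crashing:

```python
import importlib.util

def gpu_report() -> dict:
    """Summarize the Torch install without raising if torch is absent."""
    if importlib.util.find_spec("torch") is None:
        return {"torch": None, "note": "torch is not installed in this venv"}
    import torch
    info = {"torch": torch.__version__,
            "cuda_runtime": getattr(torch.version, "cuda", None)}
    if torch.cuda.is_available():
        info["device"] = torch.cuda.get_device_name(0)
        info["capability"] = torch.cuda.get_device_capability(0)
    else:
        info["note"] = "CPU-only or CUDA-disabled build"
    return info

print(gpu_report())
```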
4) Keep WebUI from re‑installing the wrong Torch
Use a minimal `webui-user.bat`. Mine looks like this:
@echo off
rem --- Use the venv that already has the correct Torch installed ---
set PYTHON=E:\Automatic111\sd-venv312\Scripts\python.exe
set VENV_DIR=E:\Automatic111\sd-venv312
rem --- Do NOT force torch installs here ---
set TORCH_COMMAND=
rem --- Clean, safe args (no skip-cuda-test needed) ---
set COMMANDLINE_ARGS=--opt-sdp-attention
rem --- Nuke any custom pip index URLs that could fetch CPU/old wheels ---
set TORCH_INDEX_URL=
set PIP_INDEX_URL=
set PIP_EXTRA_INDEX_URL=
call webui.bat
If you must install through WebUI, set `TORCH_COMMAND` to the nightly cu128 line:
set TORCH_COMMAND=pip install --no-cache-dir --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128
But I prefer keeping it empty once I’ve installed the right wheels in the venv.
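To double‑check that no stray pip index variables are set in the shell that launches WebUI, a quick scan of the environment helps (hypothetical helper; the variable names are the ones cleared in the batch file above):

```python
import os

INDEX_VARS = ("TORCH_INDEX_URL", "PIP_INDEX_URL", "PIP_EXTRA_INDEX_URL")

def stray_index_vars(env=os.environ) -> list[str]:
    """Return index-related variables that are set to a non-empty value."""
    return [name for name in INDEX_VARS if env.get(name)]

print(stray_index_vars())  # an empty list means nothing redirects pip to the wrong wheels
```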
5) Launch and confirm it’s using the GPU
On launch I see:
Applying attention optimization: sdp... done.
Model loaded in 3.1s ...
20/20 [00:00<00:00, 21–27 it/s]
That iteration speed is GPU‑level. No more CUDA errors.
What Didn’t Work (and Why)
- cu124 wheels (Torch 2.6.0+cu124, TV 0.21.0+cu124) → missing `sm_120`, so the 5090 prints warnings and/or falls back to CPU.
- CPU wheels (torch 2.8.0+cpu) → `torch.cuda.is_available()` is `False` and WebUI throws “not compiled with CUDA”.
- Adding `--skip-torch-cuda-test` → just hides the problem; it doesn’t enable the GPU.
Optional Notes
- xFormers is optional. With modern GPUs, PyTorch SDPA (`--opt-sdp-attention`) is fast and stable.
- The TF32 warning from PyTorch 2.9 is harmless; it’s just a heads‑up about a future API change.
- If you ever slip back to CPU, rerun the uninstall + purge + cu128 install steps above.
Log Snippets (for searchability)
Failure (CPU / no CUDA):
AssertionError: Torch not compiled with CUDA enabled
Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
torch 2.8.0+cpu cuda None is_available False
Failure (old CUDA 12.4 on 5090):
UserWarning:
NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
Success:
torch 2.9.0.dev...+cu128 cuda 12.8
avail True
name NVIDIA GeForce RTX 5090
cap (12, 0)
Applying attention optimization: sdp... done.
... 20/20 [00:00<00:00, 21–27 it/s]
Credit / Context
This write‑up is distilled from a live troubleshooting session.
If it helps you, consider replying with your exact GPU / driver / Torch versions so others can compare.