CUDA on WSL 2 Ubuntu 20.04 unable to detect any GPUs

After following a few guides to install the WSL 2 version of CUDA, I am still unable to get any devices to show up in WSL.

I have followed the guide at 1. NVIDIA GPU Accelerated Computing on WSL 2 — CUDA on WSL 12.3 documentation without success.

I have also followed this thread (Failure to install CUDA on WSL 2 Ubuntu - #8 by davidhsv) to make sure no "shadow" copy of the NVIDIA driver gets installed inside the Linux VM that WSL uses.
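For reference, these are roughly the commands I ran inside Ubuntu to set up the toolkit (reproduced from memory, so the repo URL and the exact cuda-toolkit-11-2 package name may not match the guide word for word); the important part, per that thread, is installing only the toolkit so that no Linux display driver gets pulled in:

# add the CUDA apt repository and its signing key
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/7fa2af80.pub
sudo sh -c 'echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64 /" > /etc/apt/sources.list.d/cuda.list'
sudo apt-get update
# toolkit only -- NOT "cuda" or "cuda-drivers", which would install a Linux driver on top of the Windows one
sudo apt-get install -y cuda-toolkit-11-2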

I still receive this error:

~/NVIDIA_CUDA-11.2_Samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

cudaGetDeviceCount returned 35
-> CUDA driver version is insufficient for CUDA runtime version
Result = FAIL
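In case it helps, here is the kind of sanity check I have been running inside the distro (my own idea, not from the guide) to see whether the Windows driver is actually being passed through to WSL:

ls -l /dev/dxg                       # GPU paravirtualization device that WSL 2 exposes to the distro
ls -l /usr/lib/wsl/lib/libcuda.so*   # CUDA driver library mounted in from the Windows-side driver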

nvidia-smi on the host Windows machine returns this:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.21       Driver Version: 465.21       CUDA Version: 11.3     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  WDDM | 00000000:01:00.0  On |                  N/A |
| 51%   56C    P0    N/A /  75W |    644MiB /  4096MiB |     13%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 3090    WDDM | 00000000:4D:00.0  On |                  N/A |
|  0%   60C    P0   133W / 449W |   1651MiB / 24576MiB |      7%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

Here is what happens when I attempt to run the nbody CUDA sample container:

docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Unable to find image 'nvcr.io/nvidia/k8s/cuda-sample:nbody' locally
nbody: Pulling from nvidia/k8s/cuda-sample
22dc81ace0ea: Pull complete
1a8b3c87dba3: Pull complete
91390a1c435a: Pull complete
07844b14977e: Pull complete
b78396653dae: Pull complete
95e837069dfa: Pull complete
fef4aadda783: Pull complete
343234bd5cf3: Pull complete
d1e57bfda6f0: Pull complete
c67b413dfc79: Pull complete
529d6d22ae9f: Pull complete
d3a7632db2b3: Pull complete
4a28a573fcc2: Pull complete
71a88f11fc6a: Pull complete
11019d591d86: Pull complete
10f906646436: Pull complete
9b617b771963: Pull complete
6515364916d7: Pull complete
Digest: sha256:aaca690913e7c35073df08519f437fa32d4df59a89ef1e012360fbec46524ec8
Status: Downloaded newer image for nvcr.io/nvidia/k8s/cuda-sample:nbody
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.
ERRO[0011] error waiting for container: context canceled
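For completeness, this is roughly how I installed the NVIDIA container runtime inside WSL before trying the sample (again from memory; the guide may point at an experimental branch of the repo, so treat the exact repo and package names as approximate):

# add the nvidia-docker repository and install the runtime
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update && sudo apt-get install -y nvidia-docker2
sudo service docker restart   # dockerd runs inside WSL, so restart it by hand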

Any recommendations? I've been pulling my hair out trying to get this working so I can run some TensorFlow models on my new 3090, which seems to have compatibility issues with any TensorFlow container older than 20.12 (the current release), and I can't get anything to run at all.
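For what it's worth, this is the kind of command I'm ultimately trying to get working (the 20.12-tf2-py3 tag is my assumption of the current NGC TensorFlow release name):

docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:20.12-tf2-py3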

Please help.

Bump…

I am getting exactly the same error. I have an RTX 3070. Is there any resolution for this error?

Same here. Any ideas?

You might want to ask WSL-specific questions in the dedicated subforum: