So, WSL2 CUDA is broken if you're up to date on WIP Dev Channel then?

I’ve seen it mentioned in response to other threads, so for clarity, if your setup is:
WIP: 21364.co_release.210416-1504
WSL2 Linux Ubuntu20.04 5.4.72-microsoft-standard-WSL2 #1 SMP Wed Oct 28 23:40:43 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Nvidia Driver 470.14
cuda-toolkit-11-0 version 11.0.3-1
WSL2 nvidia-container-toolkit 1.4.2-1
WSL2 Docker-ce 5:20.10.6~3-0~ubuntu-focal

Which would mean you’re on the latest of each component, it’s broken right now?

I don’t mean this to sound glib; I’m only seeking clarity and confirmation here.
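For anyone comparing against this setup, the versions above can be gathered roughly like this from inside the WSL2 distro (the exact dpkg package names are my assumption, not from the original post):

    uname -a                                                        # WSL2 kernel version
    dpkg -l cuda-toolkit-11-0 nvidia-container-toolkit docker-ce    # package versions
    # The Windows build is shown by running winver on the host;
    # the host NVIDIA driver version is visible in the NVIDIA Control Panel.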


Same issue here, with the same versions installed. I’m using a GeForce GTX 1060.
I’m trying it on kali-linux (which is not supported), but it fails to detect the GPU. Instead, glxinfo reports the Mesa software renderer (llvmpipe), which might not be the issue itself, but I’ve read it’s supposed to report NVIDIA as the vendor.
I also tried on the latest Ubuntu, which is supposed to be supported, but got the same result… :/
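For reference, a quick way to see which renderer glxinfo picks up (assuming mesa-utils is installed, which provides glxinfo):

    sudo apt install -y mesa-utils
    glxinfo -B | grep -iE "vendor|device|renderer"
    # On a working setup this should mention the NVIDIA GPU (via Mesa's D3D12 driver),
    # not llvmpipe/VMware.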

Same here; impossible to detect the GPU now.

Small WIP update today from 21364.1000 > 21364.1100 seems to have made no difference.

Is this the same issue reported here or is it separate?

I need to install WSL2 to work with CUDA but I’m hesitant in case I break something that’s hard to fix. I’m on Windows 10 Pro Build 21364.co_release.210416-1504 (freshly updated from a non-insider version).

I am running nbody on Docker Desktop without problems on build 21364. Just make sure:

  1. You are passing the parameter --env NVIDIA_DISABLE_REQUIRE=1 to docker to avoid this bug.

  2. Your GPU is available in WSL2. You can check this by confirming that the /dev/dxg device exists and that your GPU name appears in the output of glxinfo -B after installing the latest Mesa on Ubuntu (a combined check is sketched after this list):

    sudo add-apt-repository ppa:kisak/kisak-mesa
    sudo apt update
    sudo apt upgrade
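Putting the two checks together, a minimal verification sequence could look like this (the nbody image is the one used elsewhere in this thread; the grep pattern is just my own shorthand):

    # Check the GPU is visible to WSL2 (step 2 above)
    ls -l /dev/dxg
    glxinfo -B | grep -iE "device|renderer"

    # Run the nbody sample with the workaround flag (step 1 above)
    sudo docker run --gpus all --env NVIDIA_DISABLE_REQUIRE=1 \
        nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark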


I’m on the same build as you, but I’m facing the same problem as described. As for glxinfo -B, here’s what it outputs:

name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: VMware, Inc. (0xffffffff)
    Device: llvmpipe (LLVM 10.0.0, 128 bits) (0xffffffff)
    Version: 20.0.8
    Accelerated: no
    Video memory: 15712MB
    Unified memory: no
    Preferred profile: core (0x1)
    Max core profile version: 3.3
    Max compat profile version: 3.1
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.1
OpenGL vendor string: VMware, Inc.
OpenGL renderer string: llvmpipe (LLVM 10.0.0, 128 bits)
OpenGL core profile version string: 3.3 (Core Profile) Mesa 20.0.8
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile

OpenGL version string: 3.1 Mesa 20.0.8
OpenGL shading language version string: 1.40
OpenGL context flags: (none)

OpenGL ES profile version string: OpenGL ES 3.1 Mesa 20.0.8
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10

Try installing the latest Mesa (21.0.3) following the steps from my previous post.

I forgot to say that I got Nvidia driver version 27.21.14.7025 from Windows Update the same day I upgraded to build 21364. Maybe that’s what’s making it work for me on my GT 710.

For some reason I can’t edit the original post anymore, but here’s an update:
WIP updated to 21370.co_release.210424-1611
Nvidia Driver updated to 470.25
WSL2 nvidia-container-toolkit updated to 1.5.0-1

Issue still occurs as described in the title.


New info. See 470.14 - WSL with W10 Build 21343 - NVIDIA-SMI error - #15 by onomatopellan


- I managed to find the older driver 465.21.
- I also succeeded in upgrading Mesa to 21.1.0 (21.0.3 was outdated), although I had to run sudo apt upgrade, sudo apt --fix-broken install, and then sudo apt upgrade again.
After that, glxinfo -B finally reported the GPU name (NVIDIA GeForce RTX 2060 Super …).
I also ran the basic Docker GPU sample with --env NVIDIA_DISABLE_REQUIRE=1:
$ sudo docker run --gpus all --env NVIDIA_DISABLE_REQUIRE=1 nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

But no luck; it reports:
Error: only 0 Devices available, 1 requested. Exiting.

Damn!
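In case it helps isolate where it fails, one sanity check is whether a plain CUDA container even gets the WSL GPU device and driver libraries; a minimal sketch (the nvidia/cuda image tag is an assumption on my part, not from this thread):

    # Check that /dev/dxg and a libcuda.so are visible inside the container
    sudo docker run --rm --gpus all --env NVIDIA_DISABLE_REQUIRE=1 \
        nvidia/cuda:11.0.3-base-ubuntu20.04 \
        bash -c 'ls -l /dev/dxg; find / -name "libcuda.so*" 2>/dev/null'

If /dev/dxg is missing inside the container, the problem is likely in the nvidia-container-toolkit/Docker layer rather than in CUDA itself.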