WSL2-backed Docker containers can't see GPUs

Hello everyone,

I am trying to use WSL2-backed Docker containers to run my machine learning experiments, but I am struggling to get this working. I have reinstalled Windows and followed Step 2 and Step 3 (Option 1) of the NVIDIA GPU Accelerated Computing on WSL 2 — CUDA on WSL 12.3 documentation.
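
In case it helps the diagnosis: as I understand it, the Windows driver maps its user-mode GPU libraries into the distro under /usr/lib/wsl/lib, so a quick sanity check from the Ubuntu prompt is:

# these libraries are injected by the Windows driver; if they are missing, the WSL GPU stack itself is broken
ls -l /usr/lib/wsl/lib/libcuda.so* /usr/lib/wsl/lib/libnvidia-ml.so*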

Machine:
Processor: AMD Ryzen Threadripper
GPUs: (Qty 4) NVIDIA GeForce RTX 2080 Ti

Windows:
Edition: Windows 11 Pro
Version: 21H2
OS Build: 22000.1098

Checks:
Running "wsl -l -v" shows that the distribution is running under WSL 2.
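
As I understand it, GPU support also needs a reasonably recent WSL kernel, so it may be worth checking the kernel version as well:

wsl --status   # from PowerShell; reports the kernel version WSL is using
uname -r       # from inside the Ubuntu distro; should end in -microsoft-standard-WSL2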

The installed NVIDIA driver version is 522.25.
The CUDA version is 11.8.
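
For completeness, the Windows side can be cross-checked too (nvidia-smi.exe ships with the driver and should be on the PATH):

nvidia-smi   # run from PowerShell; should report Driver Version: 522.25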

In the WSL2 Ubuntu prompt, running "nvidia-smi" yields this output:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.56.05    Driver Version: 522.25       CUDA Version: 11.8     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:03:00.0 Off |                  N/A |
| 30%   43C    P8    22W / 250W |      0MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:21:00.0 Off |                  N/A |
| 30%   43C    P8    21W / 250W |    615MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  On   | 00000000:4A:00.0 Off |                  N/A |
| 30%   35C    P8     7W / 250W |      8MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...  On   | 00000000:4B:00.0 Off |                  N/A |
| 30%   40C    P8    18W / 250W |      0MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Docker Desktop 4.12.0 (85629) is installed.
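
As far as I know, for --gpus to work Docker Desktop must be using the WSL 2 based engine (Settings > General) with WSL integration enabled for the Ubuntu distro (Settings > Resources > WSL Integration). One way to confirm the CLI is really talking to the WSL 2 backend:

# expect "Docker Desktop" and a kernel version ending in -microsoft-standard-WSL2
docker info --format "{{.OperatingSystem}} / {{.KernelVersion}}"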

To test it with a Docker container, I tried both:
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody
and
docker run --env NVIDIA_DISABLE_REQUIRE=1 --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody
Both yield this result; they are unable to find the GPUs:

Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Error: only 0 Devices available, 1 requested. Exiting.
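
A more minimal test than nbody, in case anyone wants to reproduce this, is to run nvidia-smi in a plain CUDA base image (the tag below is just one example; any recent CUDA base tag should behave the same):

# if the Docker GPU plumbing works, this prints the same table as nvidia-smi inside the distro
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi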

I have also run a variety of TensorFlow containers, and these containers can only see the CPU.
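
For example, a check like this (tensorflow/tensorflow:latest-gpu is just one example image) reports an empty device list:

# prints [] when only the CPU is visible; a working setup lists PhysicalDevice entries of type GPU
docker run --rm --gpus all tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"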

I have tried with BIOS Secure Boot both enabled and disabled.

It seems like this should be easy, but I am struggling to get it to work. I would be grateful for any advice on what to check or try next.

I have the same problem.
I am sure it is a driver issue; I got it working for a moment by switching the NVIDIA driver between the Game Ready and Studio versions.