Enabling WSL2 GPU support on a TCC GPU (L4) by switching to MCDM

I’ve been bashing my head against the wall for a few days now trying to enable GPU usage in WSL2.
I am running Windows Server 2022, and I need to run this as the native OS. So switching to a Linux-based OS is not an option, nor is running Hyper-V virtualization with a Linux guest and passing the GPU through to it (as this would restrict access to the GPU on the Windows side).

The L4 driver does not support WDDM mode (I know there are possibilities through vGPU, but I am not sure this is an option, as the system will eventually be running in an air-gapped environment). Then I figured out that I can also switch the GPU from TCC to MCDM mode, and this seems like something that might work for enabling CUDA inside WSL2.

Switching the GPU into MCDM mode does work, but after restarting to apply the change the GPU no longer functions properly (nvidia-smi returns an error and the GPU is listed as having an issue in Device Manager), and it does not work until I reinstall the drivers to put the GPU back into TCC mode.

I have tried to find some documentation about MCDM mode and switching to it, but it seems like a very sparsely documented area. So if anyone has some experience or input I would highly appreciate it, also regarding other ways I might enable CUDA inside WSL2 with the L4 GPU :)

Thanks!

PS: This is not really my area of expertise, and most of what I know was learned over the past few days of trying to solve this problem.

Same issue here. I want to use an L4 GPU inside a Docker environment (Docker Desktop 4.38.0) running on Windows 10 (21H1) with WSL2. The GPU is detected on the host system and can be consumed by e.g. Ollama (installed directly on the Windows 10 host, not running as a container).

When trying to run any GPU-enabled container, such as

docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

I get this error:

Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy'
nvidia-container-cli: initialization error: WSL environment detected but no adapters were found: unknown.
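
In case it helps with diagnosing this, a quick way to check whether WSL2 sees any GPU adapter at all is to look inside the distro for the paravirtualization device and the libraries that the Windows NVIDIA driver normally injects (paths assumed from the standard WSL2 GPU setup):

# inside the WSL2 distro
ls -l /dev/dxg                  # GPU paravirtualization device exposed by WSL2
ls -l /usr/lib/wsl/lib/         # user-mode libraries injected by the Windows driver
/usr/lib/wsl/lib/nvidia-smi     # should list the GPU if WSL2 can see it

If /dev/dxg is missing or nvidia-smi reports no devices there, the "no adapters were found" error from nvidia-container-cli would be expected.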

The reason might be that Tesla GPUs are not supported by WSL2 yet.

If it should still be possible to use L4 resources within WSL2 / Docker Desktop, please let me know.

Hello and thank you for sharing this issue.

MCDM should indeed be a way for these GPUs to become visible in WSL2, but we need to verify a couple of things about your configuration first:

  • Make sure you are on the latest version of the NVIDIA driver first. MCDM enablement is a recent feature and was not available in our earlier drivers (570+ has the latest bug fixes and support associated with this feature, so we would recommend that driver if possible).
  • Enable MCDM via “nvidia-smi -g -dm 2”.
  • Check that MCDM works on the host system (native Windows) first by running a simple CUDA app. If you encounter any issue at this step, or if there is a yellow bang on your GPU, sharing a dxdiag file here would help us. (A rough command sketch of these host-side steps follows right after this list.)
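
A rough command sketch of these host-side steps (the driver-model query fields are standard nvidia-smi options; how MCDM shows up in that query may depend on the driver version, and on some drivers the GPU has to be selected with -i <index> instead of -g):

REM 1. Confirm the installed driver version (570+ recommended for MCDM)
nvidia-smi --query-gpu=name,driver_version --format=csv

REM 2. Show the current and pending driver model for GPU 0
nvidia-smi -i 0 --query-gpu=driver_model.current,driver_model.pending --format=csv

REM 3. Request MCDM for GPU 0, then reboot for the change to take effect
nvidia-smi -i 0 -dm 2

REM 4. After the reboot, confirm the GPU still enumerates and run a simple CUDA app
REM    (for example the deviceQuery sample from the CUDA samples)
nvidia-smi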

For the WSL portion there are some OS dependencies as well:

  • The latest version of Windows 11 or Windows Server 2025 should have all the patches needed for MCDM GPU exposure in WSL2.
  • If you are on Windows 11 or Windows Server 2025 and your GPU is in MCDM mode but you still do not see it in WSL, check that you are using WSL2 and not WSL1 (see the checks sketched below).
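
A sketch of the WSL-side checks (all standard wsl.exe commands; <DistroName> is a placeholder for the distro shown in the listing):

REM Show the WSL, kernel and Windows versions
wsl --version

REM List installed distros and their WSL version (must be 2 for GPU exposure)
wsl --list --verbose

REM Convert a distro to WSL2 if it is still WSL1 (replace <DistroName>)
wsl --set-version <DistroName> 2

REM Make sure the WSL kernel itself is up to date
wsl --update

Once the distro is confirmed to be running as WSL2, running nvidia-smi inside the distro should list the GPU if the MCDM exposure is working.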

Best regards,

Thanks for the response,

After following your recommendation, I got a yellow bang on the NVIDIA L4 display adapter.
For my setup the -g argument was not available anymore…

NVIDIA-SMI version  : 572.13
NVML version        : 572.13
DRIVER version      : 572.13
CUDA Version        : 12.8


Found here.
Therefore, I had to execute the following…

 nvidia-smi -i 0 -dm 2

After the reboot my card got this yellow bang.

DxDiag.txt (124.0 KB)

I was unable to switch back, and any command gave the following…

NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running. This can also be happening if non-NVIDIA GPU is running as primary display, and NVIDIA GPU is in WDDM mode.

I got it back to work by uninstalling the device in Windows Device Manager and then running the driver setup again. Now it is in TCC mode again.

Mon Feb 17 15:39:32 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 572.13                 Driver Version: 572.13         CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                  Driver-Model | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L4                    TCC   |   00000000:01:00.0 Off |                    0 |
| N/A   59C    P8             12W /   72W |       9MiB /  23034MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
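
For anyone who ends up in the same broken state, the command-line equivalent of the Device Manager recovery above should look roughly like this (pnputil ships with recent Windows versions; the device instance ID is a placeholder that has to be taken from the enumeration output):

REM List display-class devices and note the instance ID of the NVIDIA L4
pnputil /enum-devices /class Display

REM Remove the broken device node (placeholder ID - use the one from the listing)
pnputil /remove-device "PCI\VEN_10DE&..."

REM Rescan for hardware, then re-run the NVIDIA driver setup to return to TCC mode
pnputil /scan-devices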

Regards,
Harry

Thank you for the response,

Similarly to nvharry, after setting the driver to MCDM mode I get a yellow bang on the NVIDIA L4 display adapter.

NVIDIA-SMI version  :  572.13
CUDA version  :  12.8

This is the yellow bang that appeared after reboot:
DxDiag.txt (145.6 KB)

The OS is Windows Server 2022 (21H2), so could this be an issue with recognizing the GPU in MCDM mode, if the WSL2 MCDM support is only available on Windows 11 / Windows Server 2025?
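
For reference, the exact OS build and WSL versions on this machine can be confirmed with standard commands (nothing GPU-specific here):

REM Show the OS name, version and build number
systeminfo | findstr /B /C:"OS Name" /C:"OS Version"

REM Show the WSL, kernel and Windows versions as WSL reports them
wsl --version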

Best Regards,
Steinar