docker run --rm --gpus all nvidia/cuda:11.0-cudnn8-devel-ubuntu18.04
Unable to find image 'nvidia/cuda:11.0-cudnn8-devel-ubuntu18.04' locally
11.0-cudnn8-devel-ubuntu18.04: Pulling from nvidia/cuda
171857c49d0f: Pull complete
419640447d26: Pull complete
61e52f862619: Pull complete
2a93278deddf: Pull complete
c9f080049843: Pull complete
8189556b2329: Pull complete
c306a0c97a55: Pull complete
4a9478bd0b24: Pull complete
19a76c31766d: Pull complete
Digest: sha256:11777cee30f0bbd7cb4a3da562fdd0926adb2af02069dad7cf2e339ec1dad036
Status: Downloaded newer image for nvidia/cuda:11.0-cudnn8-devel-ubuntu18.04
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:495: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.
In addition:
root@DESKTOP-N9UN2H3:/mnt/c/Program Files/cmder# nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Failed to properly shut down NVML: Driver Not Loaded
(I’m using Windows 10 build 21376co_release.210503-1432.)
On the host I have installed the NVIDIA driver version 470.14, and inside WSL2 I have Ubuntu 20.04.
The workaround is to set NVIDIA_DISABLE_REQUIRE=1? What happens if I disable NVIDIA? Which functions will I lose? It doesn’t seem like a workaround if it disables functionality. I’ve used the standalone version of Docker for Windows and it worked: with that I can run containers with GPU support correctly. That seems to be the real workaround.
No, the workaround is installing the previous library versions with:
sudo apt-get install nvidia-docker2:amd64=2.5.0-1 nvidia-container-runtime:amd64=3.4.0-1 nvidia-container-toolkit:amd64=1.4.2-1 libnvidia-container-tools:amd64=1.3.3-1 libnvidia-container1:amd64=1.3.3-1
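If you pin those versions, you will probably also want to stop apt from upgrading them again on the next update. Assuming standard apt tooling, something like this should do it:
sudo apt-mark hold nvidia-docker2 nvidia-container-runtime nvidia-container-toolkit libnvidia-container-tools libnvidia-container1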
NVIDIA_DISABLE_REQUIRE=1 doesn’t disable anything important; it just skips the CUDA version requirement check. It’s needed because in WSL2 the CUDA version is always incorrectly reported to Docker as version 11.
I’m using Docker Desktop 3.3.1 and the GPU works because it uses the older NVIDIA libraries. You may need NVIDIA_DISABLE_REQUIRE=1 depending on the Docker image you are running.
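For reference, the variable can be passed per container with docker run’s -e flag. A minimal sketch, assuming the same image as in the first post:
docker run --rm --gpus all -e NVIDIA_DISABLE_REQUIRE=1 nvidia/cuda:11.0-cudnn8-devel-ubuntu18.04
As far as I understand, this only relaxes the cuda>= constraint the image declares via NVIDIA_REQUIRE_CUDA; it doesn’t turn off GPU support.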
NVIDIA-SMI has failed because it couldn’t communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.
Failed to properly shut down NVML: Driver Not Loaded
root@DESKTOP-N9UN2H3:/mnt/c/Program Files/cmder# docker run --rm --gpus all nvidia/cuda:11.0-cudnn8-devel-ubuntu18.04
docker: Error response from daemon: dial unix /mnt/wsl/docker-desktop/shared-sockets/guest-services/docker.sock: connect: no such file or directory.
See ‘docker run --help’.
nvidia-smi is broken and the next driver update should fix it.
It looks like you have both the NVIDIA Docker setup and Docker Desktop. You can’t use both at the same time. Go to the Docker Desktop settings, Resources → WSL Integration, disable Docker integration for the WSL2 distro you are running, and try again.
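To check which Docker endpoint the CLI inside the distro is actually talking to, and whether the NVIDIA runtime is registered, something like this should help (assuming a standard setup; output will vary):
docker context ls
docker info | grep -i runtimes
If you end up using the Docker daemon installed inside the distro rather than Docker Desktop, it may also need to be started manually, e.g. with sudo service docker start, since WSL2 doesn’t run systemd by default.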