Windows 11 WSL2 CUDA (Windows 11 Home 22000.708, Nvidia Studio Driver 512.96)

Hello. Does anyone have experience using CUDA in WSL 2 on Windows 11?
Is it normal to receive such messages?

~/.local/bin$ python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
2022-06-15 12:26:38.641299: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:38.663494: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:38.663891: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:38.664215: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-06-15 12:26:38.665784: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:38.666145: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:38.666447: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:39.086758: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:39.087183: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:39.087209: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Could not identify NUMA node of platform GPU id 0, defaulting to 0.  Your kernel may not have been built with NUMA support.
2022-06-15 12:26:39.087578: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:961] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-06-15 12:26:39.087640: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1532] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1629 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3050 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6
tf.Tensor(49.4563, shape=(), dtype=float32)

I have EXACTLY the same problem.

How did you install every component?


Good afternoon. I apologize for the delay in response.

1) Installing WSL2

wsl.exe --install
wsl.exe --update
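
To confirm that the distribution is actually running under WSL 2 (the VERSION column should show 2), you can optionally check from Windows:

wsl.exe -l -v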

1.1) Update WSL Ubuntu

sudo apt-get update && sudo apt-get upgrade

2) Installing CUDA in WSL

sudo apt-key del 7fa2af80

wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.6.2/local_installers/cuda-repo-wsl-ubuntu-11-6-local_11.6.2-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-11-6-local_11.6.2-1_amd64.deb
sudo apt-key add /var/cuda-repo-wsl-ubuntu-11-6-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda-toolkit-11-6

2.1) Check that nvidia-smi sees the video card

nvidia-smi

3) Install pip3

sudo apt install python3-pip
pip3 install --upgrade pip

3.1) Install TensorFlow

pip install tensorflow

4) Install cuDNN

sudo apt-get install zlib1g
sudo dpkg -i ./cudnn-local-repo-ubuntu2004-8.4.1.50_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-ubuntu2004-8.4.1.50/cudnn-local-E3EC4A60-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get install libcudnn8 libcudnn8-dev
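
As an optional sanity check that the cuDNN libraries are visible to the dynamic linker (assuming the packages above installed into the standard library paths), you can run:

ldconfig -p | grep libcudnn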

4.1) Check TensorFlow:
a) Check CPU:

python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

If a tensor is returned, everything is working properly.

b) Check GPU:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If your GPU appears in the list, everything is working properly.
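
One more optional check, using the standard TensorFlow API, is to confirm that the installed wheel was built with CUDA support (it should print True for a GPU-enabled build):

python3 -c "import tensorflow as tf; print(tf.test.is_built_with_cuda())"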


It seems that you are missing a PATH entry.
Every time I use CUDA, I first have to execute

export PATH=/usr/local/cuda/bin:$PATH

That is fine in my case because I also have the HPC SDK installed; to use the HPC SDK instead of the CUDA Toolkit:

export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/compilers/bin:$PATH
export PATH=/opt/nvidia//hpc_sdk/Linux_x86_64/21.3/cuda/11.2/bin:$PATH

For any CUDA work, you first have to add the path to your CUDA installation.

That is normal for Numba, PyCUDA, PyOpenCL, and CuPy, and I think the same applies to TensorFlow.
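
If you do not want to export this in every new shell, one option (just a sketch, assuming the toolkit created the usual /usr/local/cuda symlink) is to append the exports to your ~/.bashrc:

# make the CUDA compiler and libraries visible in every new shell
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc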


By the way

1- deviceQuery
2- nvidia-smi
3- numba -s

work without defining PATH. But to actually compile anything, you should first export the correct PATH to your installation.

So export the correct path:

export PATH=/usr/local/cuda/bin:$PATH

then try

nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0

also in my case after

export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/compilers/bin:$PATH
export PATH=/opt/nvidia//hpc_sdk/Linux_x86_64/21.3/cuda/11.2/bin:$PATH

then

nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Jan_28_19:32:09_PST_2021
Cuda compilation tools, release 11.2, V11.2.142
Build cuda_11.2.r11.2/compiler.29558016_0

As you can see, I have two different CUDA versions installed under WSL and I switch between them.

Good Luck

Also, be sure that your TensorFlow version is compatible with your CUDA version.
That is why I didn't install the latest CUDA, just 11.1 and 11.2, to be sure they work with almost everything, unless you really need something new that is not implemented in earlier versions.
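
A quick way to see which CUDA and cuDNN versions your installed TensorFlow wheel was actually built against (assuming a TF 2.x GPU build, where tf.sysconfig.get_build_info() exposes these keys):

python3 -c "import tensorflow as tf; i = tf.sysconfig.get_build_info(); print(i['cuda_version'], i['cudnn_version'])"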

Nvidia guys: please include the PATH story in your manual.


Good afternoon. Thanks for the detailed answer. Based on your experience, I need to install an earlier version of CUDA. Do I understand correctly that it is also necessary to install an earlier version of the Nvidia Studio drivers?

No, NO, No. Leave the drivers alone; they are installed from Windows and are now included in Microsoft Update.
THE DRIVER IS NOT THE PROBLEM WITH WSL. NVIDIA DRIVERS ARE INCLUDED in WINDOWS UPDATE.
This question has already been answered many times here.

ALL you need is to follow the manual to install the CUDA Toolkit.

I installed this more than a year ago. The important thing: DON'T install Linux drivers; they will not work and will shadow the correct Windows drivers.

Don't install drivers. Your correct question should be: how do I install nvcc, the Nvidia compiler?
The normal way is in the manual. DON'T INSTALL ANY LINUX drivers.

Then, once you have installed nvcc, check it with
nvcc -V
If it works, everything should be OK; otherwise no CUDA code will ever compile.

IF you need more, e.g. nvc++, THEN INSTALL THE HPC SDK, WITHOUT ANY LINUX DRIVERS.
Good Luck

Was this ever resolved? I also get the same errors, using the latest drivers and CUDA 11.7. TF2 appears to be working fine with GPU acceleration, but I am still getting those annoying warnings.

Any news about those annoying warnings? Also using CUDA 11.7 and TF2 …

To remove those “warnings” (which are in fact informational messages from TensorFlow), add the following two lines before importing TensorFlow.

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # or any of '0', '1', '2'

(from python - Is there a way to suppress the messages TensorFlow prints? - Stack Overflow)
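
Equivalently, since TF_CPP_MIN_LOG_LEVEL is just an environment variable, you can set it for a single run from the shell, for example:

TF_CPP_MIN_LOG_LEVEL=3 python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"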


Hello, sorry for the delay. I’m still dealing with the problem. I will post here as soon as there is news.

A new problem:

Get:1 file:/var/cuda-repo-ubuntu2004-11-7-local  InRelease [1575 B]
Get:1 file:/var/cuda-repo-ubuntu2004-11-7-local  InRelease [1575 B]
Err:1 file:/var/cuda-repo-ubuntu2004-11-7-local  InRelease
  The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 5AAE466D15CCF53C
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:4 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:5 https://download.docker.com/linux/ubuntu focal InRelease
Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:7 https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64  InRelease
Reading package lists... Done
W: GPG error: file:/var/cuda-repo-ubuntu2004-11-7-local  InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 5AAE466D15CCF53C
E: The repository 'file:/var/cuda-repo-ubuntu2004-11-7-local  InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

I use WSL2 Ubuntu-20.04, kernel version 5.10.102.1-microsoft-standard-WSL2.
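
For what it's worth, that NO_PUBKEY error usually means the local repository's signing key has not been registered with apt's keyring mechanism; the usual fix is the same keyring-copy step shown above for cuDNN (the exact .gpg filename varies per installer and is printed at the end of the dpkg -i step):

# copy the repo's keyring so apt trusts it; adjust the filename if the wildcard does not match
sudo cp /var/cuda-repo-ubuntu2004-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update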

And another symptom of the problem:

$ sudo docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
ERRO[0000] error waiting for container: context canceled
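
As a side note, that Docker error is typically separate from the NUMA warnings: it usually means the NVIDIA container runtime is not installed in the WSL distribution. A sketch of the usual fix, assuming NVIDIA's container-toolkit apt repository has already been configured as described in their container toolkit guide:

# requires the NVIDIA container toolkit repository to be set up beforehand
sudo apt-get install -y nvidia-container-toolkit
sudo service docker restart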

Hello again. The problem is that the Linux WSL 2 kernel is built without NUMA support. You can check this with the following command sequence:

# Install the numactl package
$ sudo apt-get install numactl
# Check for NUMA support
$ numactl --show

On my system I got the following output:

physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
No NUMA support available on this system.

I was able to set everything up. Everything works fine. I used CUDA Toolkit v11.6.2 and installed it using these instructions.


I tried following the steps from the link you provided, which are similar to the comment above, but I still get the warnings. Did you do anything specifically different?
