Installation problem on WSL2/Windows 11 - can't see GPU

Hi there!

I installed WSL2 from the Microsoft Store some time ago.
Here is what I have:

fire$ uname -m && cat /etc/*release
x86_64
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04.1 LTS"
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Now I need to install TensorFlow and train some neural networks.

Using this guide (CUDA Installation Guide for Linux) I tried to do it, but unfortunately, I didn't succeed.

I have the most basic GeForce RTX 3050 Ti (Laptop).

On Windows, I have driver 527.56.

When I try to find the Nvidia device under WSL, I can't see an appropriate entry; see the lspci output:

$ lspci
1e5b:00:00.0 3D controller: Microsoft Corporation Basic Render Driver
34c3:00:00.0 System peripheral: Red Hat, Inc. Virtio file system (rev 01)
4034:00:00.0 SCSI storage controller: Red Hat, Inc. Virtio filesystem (rev 01)
54e8:00:00.0 3D controller: Microsoft Corporation Basic Render Driver
6691:00:00.0 SCSI storage controller: Red Hat, Inc. Virtio filesystem (rev 01)
8cad:00:00.0 SCSI storage controller: Red Hat, Inc. Virtio filesystem (rev 01)
fe48:00:00.0 SCSI storage controller: Red Hat, Inc. Virtio console (rev 01)

One of these 3D controllers should be the Nvidia GPU… but I don't see it identified as such. Is that a problem?

And when I ran a simple TensorFlow test, it reported that some libraries CUDA requires are missing…

For this simple Python script:

import tensorflow as tf

print(tf.reduce_sum(tf.random.normal([1000, 1000])))
print(tf.config.list_physical_devices('GPU'))

See the output:

2022-12-25 20:28:49.080528: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-25 20:28:49.429094: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2022-12-25 20:28:50.386852: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-12-25 20:28:50.387255: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-12-25 20:28:50.387342: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
WARNING:root:Limited tf.compat.v2.summary API due to missing TensorBoard installation.
2022-12-25 20:28:52.964517: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2022-12-25 20:28:53.297342: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
tf.Tensor(-43.878326, shape=(), dtype=float32)
[]

How can I check that CUDA sees the GPU and works correctly? Or how can I check that TensorFlow sees CUDA?

The result of the command:

nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

Sorry if I am asking simple things - it's the first time I am trying to use TensorFlow, and it's not simple to configure everything required.
Thanks for any help.


Hi guys! Happy New Year!

Any suggestions? I have no idea how to solve this issue…
Why can't I see the Nvidia GPU when I use the lspci command?

$ lspci
2266:00:00.0 SCSI storage controller: Red Hat, Inc. Virtio console (rev 01)
51b8:00:00.0 3D controller: Microsoft Corporation Basic Render Driver
6234:00:00.0 SCSI storage controller: Red Hat, Inc. Virtio filesystem (rev 01)
75a4:00:00.0 3D controller: Microsoft Corporation Basic Render Driver
b651:00:00.0 SCSI storage controller: Red Hat, Inc. Virtio filesystem (rev 01)
c23e:00:00.0 SCSI storage controller: Red Hat, Inc. Virtio filesystem (rev 01)
dee2:00:00.0 System peripheral: Red Hat, Inc. Virtio file system (rev 01)

And how can I fix it?

Hi,

With Windows 11 + Nvidia RTX 2080 Ti + Nvidia driver 527.56 + Ubuntu 22.04 + WSL Ubuntu kernel 5.15.79.1 + conda-forge (cudatoolkit=11.2.2, cudnn=8.1.0.77) and pip (tensorflow=2.10.0), I check whether my TensorFlow uses the GPU by typing python -c "import tensorflow as tf; print (tf.config.list_physical_devices('GPU'))".

2023-01-05 16:26:51.323974: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F AVX512_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-05 16:26:51.486984: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-01-05 16:26:51.523486: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-01-05 16:26:52.197457: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /lib64:/lib64::/root/miniconda3/envs/tensorflow2.10/lib
2023-01-05 16:26:52.203288: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /lib64:/lib64::/root/miniconda3/envs/tensorflow2.10/lib
2023-01-05 16:26:52.203329: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-01-05 16:26:52.796923: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:21:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-01-05 16:26:52.839049: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:21:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-01-05 16:26:52.839127: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:966] could not open file to read NUMA node: /sys/bus/pci/devices/0000:21:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

The missing library libnvinfer.so is a TensorRT-specific file. TensorRT enables faster inference, but you can do without it.

Also, the lspci command in the WSL Ubuntu terminal will not show any information about the Nvidia GPU. Just ensure that the nvidia-smi command runs successfully in the WSL Ubuntu terminal.

If it does not, you could also try installing cudatoolkit with these steps, although the conda process is simpler. That tutorial finally has you run a program called "deviceQuery". If that succeeds, you know your GPU is being detected in WSL Ubuntu.
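
One more sanity check you could run from inside WSL (just a sketch; it assumes WSL exposes the Windows driver as libcuda.so.1, normally under /usr/lib/wsl/lib):

# Sketch: if this library loads, the Windows Nvidia driver is visible
# to Linux processes inside WSL.
import ctypes

try:
    ctypes.CDLL("libcuda.so.1")
    print("libcuda.so.1 loaded - the Windows Nvidia driver is visible in WSL")
except OSError as err:
    print("libcuda.so.1 not found - check the Windows driver / WSL setup:", err)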


Dear @prerakmody, thank you for your detailed reply.

How can I ensure that I have TensorRT installed?
I suppose I have problems with paths…
I used pip list and can't see it among the virtual environment's packages.
So I tried to use

pip install tensorrt

(.venv) (base) fire@note-4:~/py_projects/octane/calc_lpg_octane$ pip install tensorrt
Collecting tensorrt
  Downloading tensorrt-8.5.2.2-cp310-none-manylinux_2_17_x86_64.whl (549.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 549.2/549.2 MB 2.1 MB/s eta 0:00:00
Collecting nvidia-cudnn-cu11
  Downloading nvidia_cudnn_cu11-8.7.0.84-py3-none-manylinux1_x86_64.whl (728.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 728.5/728.5 MB 1.9 MB/s eta 0:00:00
Collecting nvidia-cublas-cu11
  Downloading nvidia_cublas_cu11-11.11.3.6-py3-none-manylinux1_x86_64.whl (417.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 417.9/417.9 MB 2.6 MB/s eta 0:00:00
Collecting nvidia-cuda-runtime-cu11
  Downloading nvidia_cuda_runtime_cu11-11.8.89-py3-none-manylinux1_x86_64.whl (875 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 875.6/875.6 kB 3.8 MB/s eta 0:00:00
Installing collected packages: nvidia-cuda-runtime-cu11, nvidia-cublas-cu11, nvidia-cudnn-cu11, tensorrt
Successfully installed nvidia-cublas-cu11-11.11.3.6 nvidia-cuda-runtime-cu11-11.8.89 nvidia-cudnn-cu11-8.7.0.84 tensorrt-8.5.2.2

but I still get the same error:

(.venv) (base) fire@note-4:~/py_projects/octane/calc_lpg_octane$ /home/fire/py_projects/octane/calc_lpg_octane/.venv/bin/python3.10 /home/fire/py_projects/octane/calc_lpg_octane/regression_keras.py
2023-01-06 18:22:59.040949: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-06 18:22:59.145258: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-01-06 18:22:59.640390: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-01-06 18:22:59.640484: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-01-06 18:22:59.640490: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

What paths should I 'rewrite' or set so that this library is found?
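
To see where pip actually put the TensorRT libraries, I ran this quick check (a sketch; I'm assuming the wheel keeps its .so files next to the package):

# Sketch: locate the shared libraries inside the pip-installed tensorrt wheel.
import os
import tensorrt

trt_dir = os.path.dirname(tensorrt.__file__)
print("tensorrt package dir:", trt_dir)
print([f for f in os.listdir(trt_dir) if f.startswith("libnvinfer")])

If I read the output right, the 8.5 wheel ships libnvinfer.so.8, while the warning asks for libnvinfer.so.7, so adding this directory to LD_LIBRARY_PATH would not silence it anyway.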

Now I have (using pip list for the current environment):

nvidia-cublas-cu11 11.11.3.6
nvidia-cuda-runtime-cu11 11.8.89
nvidia-cudnn-cu11 8.7.0.84

tensorboard 2.11.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.11.0
tensorflow-estimator 2.11.0
tensorflow-io-gcs-filesystem 0.29.0
tensorrt 8.5.2.2

Here is the output of nvidia-smi

fire@note-4:~/py_projects/octane/calc_lpg_octane$ nvidia-smi
Fri Jan  6 16:37:01 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 527.92.01    Driver Version: 528.02       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:01:00.0 Off |                  N/A |
| N/A   51C    P8     4W /  40W |    659MiB /  4096MiB |      7%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A        25      G   /Xwayland                       N/A      |
+-----------------------------------------------------------------------------+

Another thing worth mentioning: I think the cause lies in some paths…

I have done all the steps in the mentioned manual and succeeded: I built the ./deviceQuery and ./bandwidthTest apps, and they report a CUDA-capable GPU.

(base) fire@note-4:~/py_projects/cuda-samples/Samples/1_Utilities/bandwidthTest$ ./bandwidthTest
[CUDA Bandwidth Test] - Starting…
Running on…

Device 0: NVIDIA GeForce RTX 3050 Ti Laptop GPU
Quick Mode

Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 10.1

Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 9.9

Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 159.7

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

But I can't see the GPU in the list when using:

(.venv) (base) fire@note-4:~/py_projects/octane/calc_lpg_octane$ python -c "import tensorflow as tf; print (tf.config.list_physical_devices('GPU'))"
2023-01-06 22:28:07.324068: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-06 22:28:07.621306: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-01-06 22:28:08.305227: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-12.0/lib64 :/usr/local/cuda-12.0/lib64
2023-01-06 22:28:08.305590: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-12.0/lib64 :/usr/local/cuda-12.0/lib64
2023-01-06 22:28:08.305614: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-01-06 22:28:09.335416: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-01-06 22:28:09.514083: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda-12.0/lib64 :/usr/local/cuda-12.0/lib64
2023-01-06 22:28:09.514165: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

Really don’t know what to do…

  1. Maybe there are step-by-step manuals that worked for someone who uses WSL2/Ubuntu under Windows 11? :( I have followed several and still didn't get TensorFlow to see the GPU. What can you suggest - just uninstall everything and try again from scratch?

  2. Maybe someone can suggest how to find where TensorFlow actually gets the list of GPUs it can use?

Because, as I understand it, TensorFlow simply can't see the GPU… yet when I build the sample applications like deviceQuery, they use the GPU successfully.
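
For completeness, here is the fuller probe I'm using (a sketch): it lists every device TensorFlow registered and turns on placement logging so each op reports where it runs.

# Sketch: show all registered devices and log each op's placement.
import tensorflow as tf

print(tf.config.list_physical_devices())      # CPUs and, if found, GPUs
tf.debugging.set_log_device_placement(True)   # each op logs its device
print(tf.reduce_sum(tf.random.normal([100, 100])))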

So this does not seem to be a WSL Ubuntu issue, since you are able to detect the device via the deviceQuery program. Maybe it's a TensorFlow install issue.

Can you check whether the cuda and cudnn versions you have correspond to the requirements of your tensorflow version, using the table here?

For example, I have cudatoolkit=11.2.2 and cudnn=8.1.0.77 for tensorflow=2.10.0. I installed the cuda/cudnn packages via conda install from the conda-forge channel, and tensorflow via pip.
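
If it helps, recent TensorFlow builds can also report the CUDA/cuDNN versions they were compiled against (a sketch; the keys may be missing on CPU-only builds):

# Sketch: ask the installed TF binary which CUDA/cuDNN it expects.
import tensorflow as tf

info = tf.sysconfig.get_build_info()
print("built with CUDA:", tf.test.is_built_with_cuda())
print("expected CUDA:  ", info.get("cuda_version"))
print("expected cuDNN: ", info.get("cudnn_version"))

The versions it prints should match the row for your tensorflow version in that table.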

Thanks for your help!

But how do I check that the cudatoolkit and cuDNN versions are correct?

I have used pip list:

(.venv) (base) fire@note-4:~/py_projects/octane/calc_lpg_octane$ pip list
Package Version
keras 2.11.0

nvidia-cublas-cu11 11.11.3.6
nvidia-cuda-runtime-cu11 11.8.89
nvidia-cudnn-cu11 8.7.0.84

tensorboard 2.11.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorflow 2.11.0
tensorflow-estimator 2.11.0
tensorflow-io-gcs-filesystem 0.29.0
tensorrt 8.5.2.2

and for CUDA, I have:

(base) fire@note-4:~/py_projects/octane/calc_lpg_octane$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

Maybe I need to install the versions that were tested?

tensorflow-2.11.0: Python 3.7-3.10, GCC 9.3.1, Bazel 5.3.0, cuDNN 8.1, CUDA 11.2

I can try to reinstall everything, but I need advice on how to install all the required versions using standard virtual environments (preferably without conda).

E.g. for CUDA: how do I uninstall it the correct way and install version 11.2 instead?

Or maybe you will suggest a sequence of actions that will better fit the requirements?

When I use conda for the installation:

conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0

and then just checking:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

I get output like this:

(.venv) (base) fire@note-4:~/py_projects/octane/calc_lpg_octane$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2023-01-21 01:06:48.172748: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-21 01:06:48.364032: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-01-21 01:06:49.162307: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/fire/miniconda3/lib/:/home/fire/miniconda3/lib/:/home/fire/miniconda3/lib/
2023-01-21 01:06:49.162427: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/home/fire/miniconda3/lib/:/home/fire/miniconda3/lib/:/home/fire/miniconda3/lib/
2023-01-21 01:06:49.162435: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-01-21 01:06:50.178143: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-01-21 01:06:50.221561: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-01-21 01:06:50.221696: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:967] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

But when I then run this code to check whether the GPU is used, it still gives me:

import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))

if tf.config.list_physical_devices('GPU'):
  print("TensorFlow **IS** using the GPU")
else:
  print("TensorFlow **IS NOT** using the GPU")

Num GPUs Available:  0
TensorFlow **IS NOT** using the GPU

Why can it be like that?
Is something wrong with the virtual environments?
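
The only other thing I can think of checking is whether the two runs even share an environment (a sketch; conda puts cudnn under $CONDA_PREFIX/lib):

# Sketch: print which interpreter this is and whether the conda lib dir
# (where conda installed cudatoolkit/cudnn) is on LD_LIBRARY_PATH.
import os
import sys

print("interpreter:", sys.executable)
ld_path = os.environ.get("LD_LIBRARY_PATH", "")
print("LD_LIBRARY_PATH:", ld_path)
conda_lib = os.path.join(os.environ.get("CONDA_PREFIX", ""), "lib")
print("conda lib dir on LD_LIBRARY_PATH:", conda_lib in ld_path.split(":"))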

Any solution for this? I am facing a similar issue in a native Windows 11 environment.


I’m also having a similar problem


I found a solution and posted it at python - cuML cannot find GPU in WSL2 - Stack Overflow. Hopefully it helps people who come here in the future ☺️