I have a reasonably large deep learning project in Python using TensorFlow, and I am trying to get it to run under WSL2.
So I installed WSL2 with Ubuntu 18.04. If I try to run a very simple TensorFlow example, such as this:
import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
with tf.Session() as sess:
    print(sess.run(c))
It works perfectly: it can access my GPU and produce the output smoothly. If I change to "tf.device('/CPU:0')" it also runs, as expected.
Now, my issue is that the project I am trying to execute is quite large, and it has parts that run on the GPU and parts that run on the CPU. In a normal Linux environment I would do the following:
CUDA_VISIBLE_DEVICES=0 python3 myproject.py
And it would run my code on the GPU, except for the parts that should run on the CPU (i.e., those nested within a "with tf.device('/CPU:0')" block).
However, under WSL2 setting CUDA_VISIBLE_DEVICES makes no difference - when I run my code, it always falls back to the CPU instead of running anything on the GPU. This seems strange, because TensorFlow's typical behaviour is to take control of all available GPUs, but it is not behaving that way here. I believe this is due to the name of my GPU ('/device:XLA_GPU:0' instead of the usual '/GPU:0'), but I am not sure.
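For reference, this is roughly how I check which devices TensorFlow sees (this is where the XLA_GPU name shows up for me):

```python
from tensorflow.python.client import device_lib

# List every device visible to TensorFlow; on my WSL2 setup the GPU
# is reported as '/device:XLA_GPU:0' rather than '/device:GPU:0'
for d in device_lib.list_local_devices():
    print(d.name, d.device_type)
```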
So, my questions are the following:
1 - Can I rename my GPU to the traditional '/GPU:0'?
2 - Is there any alternative to using "CUDA_VISIBLE_DEVICES=0"?
3 - Can I install nvidia-smi under WSL2? If so, which version should I install?