I installed the NVIDIA Windows driver and CUDA according to this article.
After installing the NVIDIA Windows driver, I checked the CUDA version by running “/usr/lib/wsl/lib/nvidia-smi”:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.00       Driver Version: 510.06       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
Then I installed CUDA Toolkit 11.3 according to this article. After this, I checked the CUDA Toolkit version by running “/usr/local/cuda/bin/nvcc --version” and got:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0
As you can see, the versions reported by nvidia-smi and nvcc are different. Then I installed PyTorch through pip:
pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
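For reference, a quick way to confirm which CUDA runtime the installed wheel was built against (just a minimal sketch; the expected values in the comments assume the cu113 build installed correctly):

import torch

# The wheel's own version string and the CUDA runtime it was built against.
print(torch.__version__)    # expected: 1.10.0+cu113
print(torch.version.cuda)   # expected: 11.3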
Then I verified the torch installation like this:
import torch
x = torch.rand(5, 3)
print(x)
and this:
import torch
torch.cuda.is_available()
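For what it’s worth, a slightly fuller check (just a sketch, assuming at least one CUDA device is visible) that actually runs a computation on the GPU rather than only querying availability:

import torch

# Show the detected device and run a tiny computation on it.
print(torch.cuda.get_device_name(0))
a = torch.rand(5, 3, device="cuda")
b = torch.rand(5, 3, device="cuda")
print((a + b).sum().item())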
Up to this point, everything works fine. However, when I train a network and call the backward() method on the loss, torch throws a runtime error like this:
Traceback (most recent call last):
File "train.py", line 118, in train_loop
loss.backward()
File "/myvenv/lib/python3.6/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/myvenv/lib/python3.6/site-packages/torch/autograd/__init__.py", line 156, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: CUDA error: unknown error
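To narrow this down, a minimal backward() on the GPU (a sketch unrelated to my actual model), run with CUDA_LAUNCH_BLOCKING=1, might surface a more specific message, since CUDA errors are normally reported asynchronously:

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before the CUDA context is created

import torch

# Tiny model and loss, just to see whether backward() itself fails on this setup.
model = torch.nn.Linear(3, 1).cuda()
x = torch.rand(5, 3, device="cuda")
y = torch.rand(5, 1, device="cuda")
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
print("backward() completed, grad norm:", model.weight.grad.norm().item())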
I successfully ran the same project on another machine, so I’m wondering whether the version mismatch between the CUDA driver and the CUDA toolkit is causing these errors. Any suggestions?