Is it possible to use cuDNN outside of Docker on WSL2? I tried installing libcudnn on Ubuntu under WSL, but it isn't appearing in the usual place: /usr/local/cuda
Yes, it is possible. With WSL support we don't restrict CUDA workloads to Docker: you can do CUDA development or run CUDA apps without Docker, including ones that use cuDNN. Some extra libraries such as NVML are not yet supported, but as long as you don't rely on them it should work (see the list of known limitations).
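If it helps, one quick way to sanity-check that cuDNN is usable outside Docker is to load the library directly and query its version. This is a minimal sketch, not an official verification tool; it assumes only that `cudnnGetVersion()` (part of the public cuDNN C API) is available once libcudnn is installed:

```python
import ctypes
import ctypes.util

def cudnn_version():
    """Return cuDNN's version number if the library can be loaded, else None."""
    # find_library consults the dynamic linker's cache, so this works
    # wherever the package manager installed libcudnn.
    name = ctypes.util.find_library("cudnn")
    if name is None:
        return None
    lib = ctypes.CDLL(name)
    # cudnnGetVersion() takes no arguments and returns a size_t.
    lib.cudnnGetVersion.restype = ctypes.c_size_t
    return lib.cudnnGetVersion()

print(cudnn_version())
```

If this prints None, the dynamic linker cannot see cuDNN; running `sudo ldconfig` after installation sometimes resolves that.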
To answer your specific question, I'll need to know how you installed cuDNN in your WSL environment.
I tried installing as follows:
sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt-get update
sudo apt-get install --no-install-recommends \
    libcudnn7=18.104.22.168-1+cuda10.1 \
    libcudnn7-dev=22.214.171.124-1+cuda10.1
cuDNN libraries are not typically installed under /usr/local/cuda. They are installed in the standard library locations for the Linux distribution. For Ubuntu, this will be under
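To find where the library actually landed on your system, you can ask the dynamic linker rather than guessing a path. The fallback glob patterns below are assumptions about common Ubuntu/WSL2 layouts, not an exhaustive or authoritative list:

```python
import ctypes.util
import glob

def locate_cudnn():
    """Return a path or soname for cuDNN if it can be found, else None."""
    # Ask the dynamic linker first; this works regardless of where the
    # package manager placed the library.
    name = ctypes.util.find_library("cudnn")
    if name:
        return name
    # Fall back to scanning a few common library directories (assumed
    # locations for typical Ubuntu and CUDA toolkit layouts).
    patterns = [
        "/usr/lib/*/libcudnn*",              # Debian/Ubuntu multiarch dirs
        "/usr/lib64/libcudnn*",              # RPM-based layouts
        "/usr/local/cuda*/lib64/libcudnn*",  # tarball installs in the toolkit tree
    ]
    for pattern in patterns:
        hits = sorted(glob.glob(pattern))
        if hits:
            return hits[0]
    return None

print(locate_cudnn())
```

Equivalently, `ldconfig -p | grep cudnn` from a shell shows every cuDNN library the linker knows about.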
Could you please provide the steps to install cuDNN on WSL2? It would be very helpful for me, thanks!