Is it possible to use cuDNN outside of Docker in WSL?

Is it possible to use cuDNN outside of Docker on WSL2? I tried installing libcudnn on Ubuntu in WSL, but it isn't appearing in the usual place, /usr/local/cuda.

Yes, it is possible. With WSL support, we don't restrict CUDA workloads to Docker. You can do CUDA development or run CUDA apps without Docker, even ones that use cuDNN. Some extra libraries, such as NVML, are not yet supported, but as long as you don't rely on them it should work (see the list of known limitations).

To answer your specific question, I'll need to know how you installed cuDNN in your WSL environment.

I tried installing as follows:

sudo apt install ./nvidia-machine-learning-repo-ubuntu1804_1.0.0-1_amd64.deb
sudo apt-get update
sudo apt-get install --no-install-recommends \
    libcudnn7=7.6.4.38-1+cuda10.1 \
    libcudnn7-dev=7.6.4.38-1+cuda10.1
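As a side note, those version pins target CUDA 10.1. If apt cannot resolve them, you can check what the configured repositories actually offer (a sketch; the package name libcudnn7 comes from the commands above):

```shell
# Show the candidate versions apt can see for the cuDNN runtime package
# after the nvidia-machine-learning repo has been added.
apt-cache policy libcudnn7

# List every cuDNN-related package the configured repos provide.
apt-cache search cudnn
```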

Hi there,

cuDNN libraries are typically not installed under /usr/local/cuda. They are installed in the standard library locations for the Linux distribution; on Ubuntu, that is /usr/lib/x86_64-linux-gnu/.
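To confirm where the packages actually landed, you can ask dpkg and the dynamic linker directly (a sketch assuming the libcudnn7 packages were installed via apt as in the commands above; the fallback messages are illustrative):

```shell
# List the shared libraries the cuDNN runtime package installed;
# on Ubuntu they land under /usr/lib/x86_64-linux-gnu/.
dpkg -L libcudnn7 2>/dev/null | grep '\.so' \
    || echo "libcudnn7 is not installed"

# Ask the dynamic linker cache which cuDNN libraries it can resolve.
ldconfig -p | grep -i cudnn \
    || echo "no cuDNN libraries in the linker cache"
```

If the second command finds nothing right after installation, running `sudo ldconfig` refreshes the linker cache.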

Hi @P_Ramarao,

Could you please provide the steps to install cuDNN on WSL2? It would be very helpful for me, thanks!

Regards,
Sri Yogesh.