cuDNN Windows 10, running 1/2 speed vs Ubuntu


I have a dual-boot PC: Windows 10 is on an SSD and Ubuntu is on an HDD. I installed CUDA 9.0 and followed the instructions to install cuDNN without getting any error messages; I copied the three cuDNN files into their indicated directories. I see CUDA in the Control Panel, but I don't see cuDNN. How do I know if it's installed properly? The docs indicate that cuDNN increases the speed by 100%.

When I train a model in Ubuntu, it runs at twice the speed it does in Windows. Windows has the SSD, so I expected it to be faster.

I'm running Python 3.6, Keras 2.2, tensorflow 1.9, and tensorflow-gpu 1.9. Below is the screen output when the script runs.



2018-08-15 11:39:05.011390: I T:\src\github\tensorflow\tensorflow\core\platform\] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-08-15 11:39:05.468282: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:01:00.0
totalMemory: 8.00GiB freeMemory: 6.59GiB
2018-08-15 11:39:05.468924: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\] Adding visible gpu devices: 0
2018-08-15 11:39:06.571279: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-15 11:39:06.571602: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\]      0 
2018-08-15 11:39:06.571814: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\] 0:   N 
2018-08-15 11:39:06.572272: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6364 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
C:/Users/ML/Desktop/KovalCNN/ UserWarning: Update your Model call to the Keras 2 API: Model(inputs=Tensor("in..., outputs=Tensor("de...)


Your script log shows that the device is found: GeForce GTX 1080.
So I am assuming you are running tensorflow-gpu 1.9, which has been compiled with cuDNN? Are you using a Docker image/container, or compiling from source yourself? If you are compiling yourself, you need to point the build at the location where cuDNN is installed.

Is it possible to re-install cuDNN on your Win10 partition, instead of copy-pasting from Ubuntu? You should be able to locate the “cudnn64_7.dll” file on Windows.

Thanks for looking at my issue.

1 - I have tensorflow-gpu 1.9 installed in the PyCharm IDE I'm using.

2 - I followed the installation steps and installed CUDA 9 and cuDNN from the Nvidia Deep Learning SDK Documentation page. I installed on the Windows 10 partition from that page, not by copying from Ubuntu.

3 - I copied all the cuDNN files into the toolkit directories as indicated in the documentation. I also checked that the environment variables point to where cuDNN is located.

How can I tell if the cuDNN libraries are installed properly and working? The speed difference between the Ubuntu and Windows 10 environments leads me to believe my cuDNN installation isn't working.