I am presently operating the equipment characterised above under Keras for R with tensorflow-cpu 2.0.0 on RStudio with R 3.6.2. This corresponds to TF 2.0 in Python, accessed from R via reticulate 1.14 as the interface to keras 2.2.5. I'm doing DL development work with up to 6 convolutional layers (small window sizes: <=4x4) and 1-3 LSTM layers on rather compact data (<10,000 points, <10 dimensions).
Q: Which NVIDIA software do I need to operate my GPU with tensorflow-gpu 2.0.0?
Your advice would be highly appreciated.
You will need a 418.x or later NVIDIA driver, the CUDA 10.1 toolkit, and cuDNN 7.6.
Windows setup guide is here: https://www.tensorflow.org/install/gpu#windows_setup
Tnx a lot for your prompt and concise reply - very helpful indeed. I found the 10.1 toolkit, cuDNN 7.6, and all the release notes and installation guides, but - strangely enough - not the 418.x CUDA driver itself (I only found release notes for 418.81 and a blog article on the 418 driver series, https://news.developer.nvidia.com/unleash-the-power-of-turing-with-nvidia-driver-418/ ). Do you know a download link for this driver?
A manual search under "Official Drivers | NVIDIA" only delivers the Studio Driver and the Game Ready Driver, both starting with version numbers 442.xx.
442 > 418, so that driver will work.
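For anyone checking their own driver against this advice, the comparison is numeric on the dotted version string, not alphabetic. A minimal sketch in Python (the same language reticulate ultimately drives; the function name is mine, not an NVIDIA or TensorFlow API), using the 418 floor stated above:

```python
def driver_ok(installed: str, minimum: str = "418.0") -> bool:
    """True if the installed NVIDIA driver version meets the minimum.

    Versions are compared component-wise as integers, so 442.19 satisfies
    a 418.x floor, even though some naive string comparisons would not.
    """
    def as_tuple(version: str):
        return tuple(int(part) for part in version.split("."))
    return as_tuple(installed) >= as_tuple(minimum)

print(driver_ok("442.19"))  # True  -- a current Studio/Game Ready driver is fine
print(driver_ok("417.71"))  # False -- older than the 418.x floor
```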
Oh - great. I thought the "418" was a must and only the ".x" could be anything. Now, 442 > 418 makes things easy.
Thank you very much for your support!
Just a short piece of feedback: I got everything to work, and the speed of my DL network has increased by about a factor of 4 compared with my initial equipment. The installation of cuDNN was not really straightforward, though: I had to apply exactly the same tricks as suggested in https://stackoverflow.com/a/59197914/12687841, since the file cudart64_100.dll was not in the path where it should be and cublas64_100.dll was missing completely.
The funny thing, however, is the warning "CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2". (My CPU is an i7-9750H.)
Is there a possibility to get that working, too? I am operating Keras for R under RStudio, so any fix would need to be applicable in this environment.
In any case - thank you very much for your concise and helpful support, nluehr - I really appreciate it!
Glad you got TF working with your GPU.
The publicly distributed TF binaries must support a range of CPUs. This means they cannot take advantage of some instructions that are available only on the most recent CPU architectures. If you want to use those instructions, you would need to build your own TF binaries from source. Given that you are using a GPU for your floating-point intensive operations, it is unlikely that the addition of AVX2 instructions would yield a significant performance benefit.
Yeah, I understand that - thanks for your reply.
More than one year of quite successful work with the above equipment (Win10 Pro x64, i7-9750H & GeForce RTX 2060 + CUDA 10.1) has passed since my last conversation. Now I learn that I would need CUDA 10.2 instead of 10.1 in order to install Torch for R on my notebook, which raises the following questions:

Is it possible to update the existing installation to that level at all?

Where do I find the necessary software files and installation guide? (I noticed that CUDA 11.3 is the recommended version, but Torch for R 0.3.0.9001, the most recent version, requires 10.2 at the moment, and I didn't find anything of that level.)

I was told that CUDA 10.2 may be installed in parallel. This would be a useful configuration, since I could keep the working CUDA 10.1 as the standard for TensorFlow while experimenting with CUDA 10.2 to get Torch for R working; if that succeeds, I could try TensorFlow with it, too. Thus: how should I proceed to get these two installations in parallel?
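On Windows, CUDA toolkits normally install side by side under version-numbered folders, and which one a process picks up is decided by PATH order at DLL-load time. The following is only a sketch of that idea in Python: the install root is the default Windows location (an assumption, adjust to your machine), and the function name is mine, not a CUDA tool.

```python
# Assumption: default Windows install root for NVIDIA CUDA toolkits.
CUDA_ROOT = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA"

def path_for_cuda(version, current_path, sep=";"):
    """Build a PATH string that puts the chosen CUDA version's bin directory
    first and drops the bin directories of any other installed CUDA version."""
    chosen = CUDA_ROOT + "\\v" + version + "\\bin"
    kept = [d for d in current_path.split(sep) if not d.startswith(CUDA_ROOT)]
    return sep.join([chosen] + kept)

old_path = CUDA_ROOT + r"\v10.1\bin;C:\Windows\system32"
print(path_for_cuda("10.2", old_path))
```

In an R session the equivalent would be a `Sys.setenv(PATH = ...)` call made before `library(torch)` or `library(tensorflow)` loads any CUDA DLLs, so each session can be steered to 10.1 or 10.2 without touching the installations themselves.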
Any useful hint would be highly appreciated - tnx in advance!