I just picked up a 1070 Ti and I’d like to do some Python ML development on my new computer/GPU. I’m running into problems getting Python to recognize the GPU. I’ve spent this evening trying to install the CUDA Toolkit 9.2 and it keeps failing. I’ve installed Visual Studio Community Edition and I have Anaconda Python installed. Is there a document that lists the proper installation steps to get all this working? The CUDA Toolkit installer doesn’t report any specific errors, only that it’s failing.
Do I really need the 1.5GB CUDA Toolkit installed along with Visual Studio just to write some Python ML code?
Any help will be greatly appreciated.
OS: Windows 10 Version 10.0.17134.228
CPU: i7 8700K
SSD: 256GB WD
If the Python libraries utilize CUDA then, yes, you need the toolkit installed.
Why are you going to use v9.2 instead of v10.1?
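Once the toolkit and a CUDA-enabled Python library are installed, you can confirm from Python whether the GPU is visible. A minimal sketch, assuming PyTorch as the ML library (the original post doesn’t say which library is in use, so this is illustrative; the equivalent TensorFlow call would be `tf.config.list_physical_devices('GPU')`):

```python
# Minimal sketch: check whether this Python environment can see a CUDA GPU.
# Handles the case where PyTorch itself isn't installed yet.
def cuda_status():
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        # Report the first visible device, e.g. "GeForce GTX 1070 Ti"
        return "CUDA available: " + torch.cuda.get_device_name(0)
    return "torch installed, but no CUDA device visible"

print(cuda_status())
```

If this reports no CUDA device even though the card is installed, the usual suspects are a failed toolkit install or a driver older than the toolkit requires.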
Hi everyone. I have a system with a GeForce RTX 2080 Ti (11GB). As per the official NVIDIA site, I should use CUDA Toolkit 10, as it is for the Turing architecture and my graphics card is also Turing. However, I want to deploy all my trained models on a Jetson TX2 dev kit, which has CUDA 9, for practical implementation. So, I have two options to remove the compatibility problems for my development.
Either I have to install CUDA 9 on my system with the RTX 2080 Ti, or I have to upgrade the CUDA version of the Jetson TX2 to CUDA 10. Can anyone suggest which option makes the best use of both platforms?
Another question: can I install and use the CUDA 9 toolkit with an NVIDIA GeForce RTX 2080 Ti? If so, perhaps I am under-utilizing the performance of the card, because CUDA 9 only targets up to the Volta GPU architecture.
Could anyone kindly help me clear up this confusion?
Please don’t pick a random thread to ask an unrelated question.
If you are on Windows, that is not a supported environment for Jetson cross-development anyway. If you are not doing cross-development, there is no particular reason that your desktop environment needs to match your Jetson environment.
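On the version question: each CUDA toolkit release supports GPU architectures only up to a certain compute capability, which is why CUDA 9 cannot natively target Turing. A rough sketch of the relevant entries (an illustrative lookup I wrote for the GPUs mentioned in this thread, not an official NVIDIA table; double-check the toolkit release notes):

```python
# Illustrative mapping: minimum CUDA toolkit release that can natively
# target a given compute capability. Covers only the GPUs in this thread.
MIN_TOOLKIT = {
    "sm_61": "8.0",   # Pascal desktop (e.g. GTX 1070 Ti)
    "sm_62": "8.0",   # Pascal on Jetson TX2
    "sm_70": "9.0",   # Volta
    "sm_75": "10.0",  # Turing (e.g. RTX 2080 Ti)
}

def min_toolkit_for(compute_capability):
    """Return the minimum toolkit version string, or 'unknown'."""
    return MIN_TOOLKIT.get(compute_capability, "unknown")

print(min_toolkit_for("sm_75"))  # Turing needs at least CUDA 10.0
```

So the RTX 2080 Ti machine should stay on CUDA 10; code built there for an older architecture (e.g. compiled with a `sm_62` target for the TX2's Pascal GPU) is the direction compatibility flows, not the other way around.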