CUDA support for Visual Studio 2017

Does the current version of the CUDA toolkit support Visual Studio 2017?
If not, when can we expect it to be supported?

(1) No, CUDA 8.0 does not support Visual Studio 2017. See the Windows Installation Guide.

(2) Only NVIDIA knows the answer to that. Historically they have not commented on unreleased (future) CUDA versions, so it seems unlikely they will provide any definite statements in this case.


I found a way to build CUDA projects under VS 2017:

Download CUDA 9.0 from here:
It works for me.


CUDA 8.0 only supports Visual Studio 2015

CUDA 8.0 supports multiple versions of MSVS, from MSVS 2010 (which is what I am using) through MSVS 2015. It does not support MSVS 2017. For supported versions, see the Windows Installation Guide.

This is not good. On Windows, Tensorflow does not currently support CUDA 9. So we have to downgrade to Visual Studio 2015 to run tensorflow-gpu on Windows? That is annoying.

You might want to inquire with the Tensorflow folks when they will add support for CUDA 9. The final version of CUDA 9 has been shipping for a while, and the developers had access to the release candidate.

It will not support CUDA 9 on Windows until version 1.5 is released. I just bought a new computer and am simply trying to run tensorflow-gpu. I have gone through this before, but I can't put it all together this time. Very frustrating. I installed VS 2015, but I still don't think CUDA 8 is installed properly, as the samples folder is not present and I can't run any tests. Tensorflow then fails to install. You can't really expect everyone to immediately release a new version of their software just because you release a new version. You should still support CUDA 8.

Edit: OK, I just resolved it. I had to uninstall Anaconda and reinstall. It seems VS and CUDA need to be installed before Anaconda.

CUDA 8 is still available for download and installation. I downloaded and installed the latest version of it (8.0.61) fairly recently (due to various constraints and dependencies I am unable to upgrade to CUDA 9 at this time). I am running on Windows 7. Installation was smooth and I have not encountered any issues so far. I am running with the latest drivers. I am unable to make a remote diagnosis as to why your installation may be inoperable.

Exactly. But strangely enough, a certain percentage of CUDA programmers starts complaining if a new version of MSVS ships and it is not immediately supported by CUDA. I figured it would only be fair to extend the same approach to the Tensorflow vendor (Google) :-) Maybe I have a quirky sense of humor.

Analogous situation to CUDA, which added (preliminary) support for MSVS 2017 in the next version, namely CUDA 9. That is in general what happens in a software stack: if the software lower in the stack gets updated, support for it in the next higher layer is added in that layer's next version, not in the version that has been out for a year and is done and dusted.

I understand. I just wish everyone would try to mitigate these situations better. It would be nice if it were fairly easy to add support for newer components to higher-layer software that is a little older. This seems to be a common problem with dependencies on Windows.

CUDA 8 is not done and dusted on Windows, since Tensorflow cannot support CUDA 9 on Windows. A lot of people using CUDA on Windows will also be using Tensorflow.

CUDA 8 is “done and dusted” in the sense that it has been in its final form since February of 2017 (or thereabouts) and the code base is frozen because the developer resources have since been moved to CUDA 9 and the next future version of CUDA.

In summary: CUDA 8 is available for download, it works just fine with multiple MSVS versions from MSVS 2010 through MSVS 2015, there are (to my knowledge) no critical bugs in it that need to be addressed, and you can use it to run Tensorflow on Windows, today. What more do you need / want?

Well, it would be nice if CUDA 8 were able to recognize that VS 2017 is installed. Currently I have no problem with VS 2015, but in the past I have had problems and had to upgrade to 2017 (on another system). I think some people are able to use CUDA 8 with VS 2017, but it was not finding what it needed on my system, so I had to downgrade. My point is that if a large group of people has to keep using a certain version of CUDA, then it should not be considered done and dusted. It would be nice if the package were written in such a way that making such updates to both the new version and the old version is not a big deal. I am not a software engineer, so I really have no idea how hard this is to do.

There is an old song by the Rolling Stones which addresses this scenario: You can’t always get what you want.

For better and worse, CUDA requires tight integration with various components of the host toolchain (header files in particular). This is needed, for example, to ensure the correct functioning of `__host__ __device__` functions (identical code that can be built for both host and device). As a consequence, adjusting CUDA to changes in the host toolchain is non-trivial and requires developer resources.

There are competing models of host toolchain integration, e.g. OpenCL, which do not result in such tight coupling. But as a trade-off, they cannot offer some features that CUDA users find useful.

As explained earlier, when there are dependencies in a software stack, there is typically a lag between the time a lower-level component is updated and the time the next level up is adjusted to those changes. This problem can be exacerbated when different development models are used by the teams producing the various components that make up the software stack. That's why software packages typically state their exact prerequisites. CUDA does it, Tensorflow does it. A free "mix and match" of software components is not possible unless interfaces are frozen, standardized, validated, and enforced. Which is counterproductive in fields that are still in rapid development, like parallel computing and deep learning.

You might want to tell that to the people who maintain Tensorflow. Why can’t the current Tensorflow just work with CUDA 9?

I understand your points and they are valid. It does seem that things are much easier to integrate on Linux than Windows. If whatever makes that integration simpler on Linux could be applied to Windows that would be great.

The issue with Tensorflow not supporting CUDA 9 seems to be the fact that Tensorflow is developed primarily for Linux while NVIDIA develops primarily for Windows, so development resources are split between operating systems. I feel like everybody would be better off if software companies whose products are often coupled together would communicate and figure out ways of developing such that an upgrade does not break the other's software. For instance, if CUDA 9 could have been written so that it worked with Tensorflow 1.3, just as CUDA 8 can be used with 1.3.

On another note, I think it is strange that NVIDIA driver support for Linux has historically been terrible. Most people who use NVIDIA for deep learning are on Linux. This is one of NVIDIA's biggest mistakes, in my opinion. Linus Torvalds has been frustrated with this as well. Check out this two-minute YouTube video on the issue:

I don’t hold religious views about Linux/Unix vs Windows, having spent about equal amounts of time developing on both. However, in general development on Windows incurs what I call the “Windows tax”, i.e. additional development time/cost, which in my experience is somewhere in the 20% to 30% range. Your mileage may vary.

The host-toolchain integration issues with CUDA apply to Linux in just the same way they apply to Windows. With every new gcc version that ships there is a chorus of complaints why CUDA does not integrate with those brand-new toolchains yet. And I provide much the same answers to that as I have provided here with respect to MSVS.

I tend to ignore the personal opinions of angry alpha males like Mr Torvalds. People getting angry for not getting their way is behavior that I consider appropriate for kindergarten; it is not conducive to solving technical issues. While no product is flawless, the feedback I have seen over many years is that NVIDIA's proprietary drivers for Linux are as good as or better than those of competing vendors. And at least some issues that arise when using these drivers are due to obstacles people in the Linux world purposefully created for religious reasons.

Linux has many features that are useful in high-performance computing, which is why it has a high rate of adoption in that field. The makers of Tensorflow are free to support any platform they like and find useful, and if that is predominantly Linux, that is a perfectly valid choice for them to make. And it is smart business for NVIDIA to support important middleware frameworks on whatever platforms the middleware vendors chose.

As long as companies operate on a for-profit basis and therefore are in competition, they will co-operate only when and where it makes economic sense for all participants. Even if you change the analysis from the corporate world to the academic world, I think you will find much more competition and much less cooperation than one might naively envision. At least that is my personal impression.

CUDA’s job is to grow the market for NVIDIA’s GPUs, and its evolution happens accordingly. Selling GPUs is what pays the bills at NVIDIA, and a significant portion of that is remuneration for software engineers. I am not aware of any specific backward compatibility issues with CUDA 9, so you would have to ask the Tensorflow developers why their product cannot support CUDA 9. Is it even for any technical reasons? I don’t know.

It can. However, it requires that it be built from source. The current linkage with CUDA 8 is for people who are trying to install binary components. At some point, I expect, the TF maintainers will have a new/future version of TF with binary support for CUDA 9. To be clear, I’m not suggesting that building from source is easy or trivial, or that I know how to do it on all platforms. But it is possible, without a doubt. It may be harder on Windows than on Linux. And as has already been stated a few times, there are a lot of interdependent software components in a full stack that involves TF, so the process of building from source may have many dependencies.

This configurational difficulty with a modern DL software stack has been around for a while, and it is not disappearing rapidly at this point in time. If anything, it may be getting worse. NVIDIA has developed technologies like NGC in part as an attempt to address the desires of people who need/want the latest and greatest with a minimum of fuss. You can download an NGC container that has a very recent TF (1.4+) and CUDA 9. Nothing to compile or install (beyond what is needed to support an nvidia-docker container). No, it is not available (yet) on Windows.

So this entire issue boils down to the TensorFlow folks at Google not producing a pre-built binary for CUDA 9, which presumably would only take them hours to produce (being very familiar with their own build system)? If my understanding is correct, this completely clears up which tree Tensorflow users should bark up (hint: it’s not NVIDIA).