So I’m just getting into development with CUDA, and you’ll have to forgive me for the long-winded post, because I’m not sure what is relevant to my problem, or what exactly my problem is.
My system is rocking a GeForce 560 Ti (Yes, I know, my computer is ancient. But I love her all the same), so the most recent driver compatible with my card is 391.35, which, to my understanding, limits me to CUDA 9.1 or older. So, having installed VS 2017 (the latest version of VS compatible with CUDA 9.1), I ran into an issue where my _MSC_VER was just slightly out of bounds, because the latest updates of VS 2017 weren’t around yet when 9.1 was being worked on (I’m assuming). So I modified the check in host_config.h from “#if _MSC_VER < 1600 || _MSC_VER > 1911” to “#if _MSC_VER < 1600 || _MSC_VER > 1916” (the compiler version shipped with the most recent patch of VS 2017).
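For anyone curious, the edit looks something like this (the exact path and the wording of the error message may differ slightly between CUDA 9.x releases, so treat this as a sketch of my change rather than the verbatim header):

```cpp
// In <CUDA 9.1 install dir>/include/crt/host_config.h
//
// The original guard rejected any MSVC newer than VS 2017 15.4:
//   #if _MSC_VER < 1600 || _MSC_VER > 1911
//
// I raised the upper bound to 1916, the compiler version from
// the VS 2017 15.9 updates, so nvcc stops refusing my toolchain:
#if _MSC_VER < 1600 || _MSC_VER > 1916

#error -- unsupported Microsoft Visual Studio version!

#endif /* _MSC_VER check */
```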
Now that nvcc gets past the environment check, the sample code fails to compile with a bunch of “expression must have a constant value” errors coming from a file called “type_traits” (part of the MSVC standard library headers).
I guess what I’m really asking is: there’s gotta be an easier way to do this, yeah? Is my hardware just too old for CUDA development?