I am fairly new to both CUDA and C++ Visual Studio projects, but I have managed to get everything working locally on my machine. However, I am a bit stumped about how best to deploy and package a C++ CUDA-enabled binary for another machine on which the CUDA Toolkit/SDK has not been installed. Surely it should not be necessary to install the CUDA SDK/Toolkit on a production server just to get the binary to run there (library references, environment variables, etc.)?
So: is there an easy way in Visual Studio to produce a single deployable CUDA package, or to have all the referenced CUDA binaries/libraries copied to the output bin folder when building?
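For instance, I imagine something along the lines of a post-build event in the .vcxproj that copies the CUDA runtime DLL next to the executable. This is only my guess at what such a step might look like; the $(CudaToolkitBinDir) macro and the cudart DLL name pattern are assumptions on my part for a 64-bit CUDA 4.0 build:

```xml
<!-- Sketch (assumed, not verified): post-build step copying the CUDA
     runtime DLL (e.g. cudart64_40_*.dll for CUDA 4.0 x64) from the
     toolkit's bin directory into the build output folder -->
<PostBuildEvent>
  <Command>xcopy /y "$(CudaToolkitBinDir)\cudart64_*.dll" "$(OutDir)"</Command>
</PostBuildEvent>
```

Is something like this the intended approach, or does the CUDA build integration already provide a cleaner mechanism?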
Since I don’t know how to do this, I think the best solution would be to manually copy the needed dependencies to a location of my choice and configure the Visual Studio project to point to that directory instead. As far as I can tell, there are two places in the VS project settings where this needs to be done:
- Under “CUDA C/C++” -> “Common”, set “Additional Include Directories” to point to the production-folder equivalents of:
“C:\ProgramData\NVIDIA Corporation\NVIDIA GPU Computing SDK 4.0\CUDALibraries\common\inc”
“C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include”
- Under “Linker” -> “General”, set “Additional Library Directories” to $(CudaToolkitLibDir);
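If it matters, I believe these two settings end up in the .vcxproj roughly as follows (the D:\deploy\cuda path is a hypothetical example of the production folder I would copy the files to):

```xml
<!-- Sketch of how the two property-page settings above appear in the
     .vcxproj; "D:\deploy\cuda" is a hypothetical deployment location -->
<ItemDefinitionGroup>
  <CudaCompile>
    <!-- “CUDA C/C++” -> “Common” -> “Additional Include Directories” -->
    <AdditionalIncludeDirectories>D:\deploy\cuda\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
  </CudaCompile>
  <Link>
    <!-- “Linker” -> “General” -> “Additional Library Directories” -->
    <AdditionalLibraryDirectories>D:\deploy\cuda\lib\x64;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
  </Link>
</ItemDefinitionGroup>
```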
The question I have is where the variable $(CudaToolkitLibDir) is defined. It does not appear to be an environment variable. I am guessing it points to “C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\lib\x64”, but where is this defined?
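My current guess (unverified) is that it comes from the CUDA build customization files that the toolkit installs under the MSBuild BuildCustomizations folder (e.g. “CUDA 4.0.props”), with the root derived from the CUDA_PATH environment variable that the installer sets, along these lines:

```xml
<!-- Paraphrased guess at what the CUDA build customization .props file
     might contain; not the literal file contents -->
<PropertyGroup>
  <!-- CUDA_PATH is set by the toolkit installer, e.g.
       C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0 -->
  <CudaToolkitDir Condition="'$(CudaToolkitDir)' == ''">$(CUDA_PATH)\</CudaToolkitDir>
  <CudaToolkitLibDir Condition="'$(Platform)' == 'x64'">$(CudaToolkitDir)lib\x64</CudaToolkitLibDir>
</PropertyGroup>
```

Can anyone confirm whether this is where the macro actually comes from, and whether it can be safely overridden to point at a self-contained deployment folder?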