I am looking for a way to run CUDA device code from a C++ host code.
The 0.8 release notes say “…only the C subset of C++ is supported” - bad news. On the other hand, the NVCC_0.8.pdf claims “…source files for CUDA applications consist of a mixture of conventional C++ host code, plus GPU device functions.”
Does anybody have an idea how one can do it? Thanks!
It can certainly be done. What specifically are you trying to do?
I segregate all my CUDA code into a separate .cu file and compile it with nvcc to a shared library. (Note that it has been mentioned in other topics that loading CUDA code from a shared library does not work on Windows yet.) The .cu file includes host functions that provide my “public” interface which I also declare in a .h file. Then my C++ code includes the .h file and is able to call the host functions I declared. All the CUDA specific code stays in the .cu file, so there is no problem compiling my C++ code with g++.
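For example, something along these lines (a minimal sketch; the file names, the scale_on_gpu function, and the exact compiler flags are just placeholders for illustration and may differ for your toolkit version):

```
// ==== kernels.h -- plain interface header, no CUDA types or headers needed ====
#ifndef KERNELS_H
#define KERNELS_H

// Host wrapper exported from the shared library built by nvcc.
void scale_on_gpu(float* data, int n, float factor);

#endif

// ==== kernels.cu -- build e.g. with: nvcc -shared -Xcompiler -fPIC kernels.cu -o libkernels.so ====
#include "kernels.h"

__global__ void scale_kernel(float* d, int n, float f)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d[i] *= f;
}

void scale_on_gpu(float* data, int n, float factor)
{
    float* d_data = 0;
    cudaMalloc((void**)&d_data, n * sizeof(float));
    cudaMemcpy(d_data, data, n * sizeof(float), cudaMemcpyHostToDevice);

    int block = 256;
    int grid  = (n + block - 1) / block;
    scale_kernel<<<grid, block>>>(d_data, n, factor);

    cudaMemcpy(data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_data);
}

// ==== main.cpp -- ordinary C++, compiled e.g. with: g++ main.cpp -L. -lkernels ====
#include <vector>
#include "kernels.h"

int main()
{
    std::vector<float> v(1024, 1.0f);
    scale_on_gpu(&v[0], (int)v.size(), 2.0f);   // runs on the GPU via the shared library
    return 0;
}
```

The C++ side only ever sees the plain declaration in the header, so nothing CUDA-specific leaks into the code that g++ compiles.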
Thanks for your answers, seibert and Jared. I plan to use a C++ image processing toolbox together with CUDA. Jared, your approach sounds clear and reasonable; I will have to check how far I can get with it. In particular, one has (of course) to ensure that the C++ code is free of any CUDA-specific host code/types that could, for example, hold state between calls to the shared library.
You can keep float2s, float3s, etc. in your C++ code if you make sure they’re properly defined (they’re just structs of floats).
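For example, a header shared by both sides could look like the sketch below (the file and function names are hypothetical; vector_types.h is a header-only file from the CUDA toolkit's include directory, so including it from plain C++ does not add a runtime dependency):

```
// ==== points.h -- shared by the .cu file and plain C++ code ====
#include <vector_types.h>   // defines float2, float3, float4, ... (needs -I<cuda>/include under g++)

// Passing CUDA vector types across the library boundary works because
// they are ordinary structs of floats with a fixed layout.
void transform_points(float3* points, int count, float2 offset);
```

If you want to keep the C++ side completely free of CUDA headers, you can instead declare your own struct of floats, as long as both sides agree on the layout (keep an eye on alignment, since some of the CUDA vector types are declared with alignment attributes).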