C++ classes in host code


I was hoping someone could explain to me a bit about using C++ with CUDA host code. Section 4.2.5 of the manual states that full C++ is supported for host code. So does that mean that we can have host code with classes that directly call routines such as cudaMallocHost etc.?

When I write C++ classes and compile them as a .cu file, it compiles fine, but when I try to use them from an external .cpp program, I get errors.

I notice that in some SDK projects, such as particles, the cuda* routines are called from plain C functions, and these in turn are called from C++ class methods. I thought only kernel code needed to be plain C.

Confused …

What errors are you getting? You need to #include "cuda_runtime_api.h" to get cudaMallocHost and similar functions. And any host functions in the .cu file need to be declared extern "C" in the header so that the C++ compiler knows how to link to them.

Compiling actual C++ code with nvcc requires a command-line switch (check the release notes: it is labeled an alpha feature).

I'm not sure if this is what you need or mean, but here's a Visual Studio project that contains CUDA test code. The .cu file contains a C++ class that calls the kernel, and an object of this class is created in code that is not compiled by nvcc.
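The layout in the attached project presumably looks something like the following sketch (all names here are invented, not taken from the zip). The key point is that the class declaration sits in a plain C++ header, while the member function that launches the kernel is defined in the .cu file, so the kernel launch syntax never reaches the host compiler:

```cuda
// --- gpuadder.h: plain C++ header, safe to include from .cpp files ---
class GpuAdder {
public:
    void add(float* a, const float* b, int n);  // launches a kernel internally
};

// --- gpuadder.cu: compiled by nvcc ---
__global__ void addKernel(float* a, const float* b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] += b[i];
}

void GpuAdder::add(float* a, const float* b, int n) {
    float *dA, *dB;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMemcpy(dA, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, b, bytes, cudaMemcpyHostToDevice);
    addKernel<<<(n + 255) / 256, 256>>>(dA, dB, n);
    cudaMemcpy(a, dA, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);
    cudaFree(dB);
}

// --- main.cpp: compiled by the host compiler, includes only gpuadder.h ---
// GpuAdder adder;
// adder.add(a, b, n);  // kernel launch hidden behind a normal method call
```

From the .cpp side this is just an ordinary C++ class, which is why the object can be created in code that nvcc never sees.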

(The settings are for x64 Debug mode - you will probably have to copy them over if you want to use something else, e.g. 32-bit.)
cudacpp.zip (2.76 KB)

Thank you both for your comments and example code. Yes, I needed to opt in to C++ compilation. I don't think there is an example of that in the SDK, though… might be nice to have one example that truly uses the C++ mode.

Thanks again!

I guess as soon as the C++ support leaves alpha status, NVIDIA will add an example project to the SDK.