Mixing C++ and CUDA

Hello, I know this probably has been asked before, but I really am unable to find it.
I can use a externally created kernel from inside a C++ Class with no problem, what I would like though, is to create a kernel inside a class. Is this even possible?

Thanks for the help.

By “creating a kernel inside a class” you perhaps actually mean defining a `__global__` function as a member function of a class, something like

class Foo_Class {
    __global__ void Kernel_Function() { /* implementation ... */ }
};

With the above code, you will probably receive the following compilation error: invalid combination of memory qualifiers. The reason is that `__global__` functions are called by the host via the CUDA launch syntax

    Kernel_Function<<<gridSize, blockSize>>>(...);

and are executed on the device, so they cannot be member functions of a class. So, the answer to your question is “no”.

Conversely, you can have kernel launches of the form above within a class member function definition.
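As a minimal sketch of that pattern (the kernel and class names here are made up for illustration), the kernel is defined at file scope and launched from inside a host-side member function:

```cuda
#include <cuda_runtime.h>

// Kernel defined at namespace scope, outside any class.
__global__ void scaleKernel(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

class Foo_Class {
public:
    // A regular (host) member function may launch the kernel.
    void scale(float* d_data, float factor, int n) {
        int block = 256;
        int grid  = (n + block - 1) / block;
        scaleKernel<<<grid, block>>>(d_data, factor, n);
        cudaDeviceSynchronize();
    }
};
```

Here `d_data` is assumed to be a device pointer allocated with cudaMalloc before the call.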

I would say (with respect) that the title of your post is a bit misleading. Something like “defining a __global__ function as a class member” would have been more descriptive.

JFSebastian, that was exactly what I was looking for (sorry for the misleading post).

And what about accessing a class from a __global__ function? Is it possible? Or even worth it?


Misread the problem, sorry.

Actually sBc-Random, JFSebastian answered my question; my other question was whether it’s possible (and even worthwhile) to use a class inside a CUDA kernel.

If by “use a class inside a CUDA Kernel” you mean passing objects to kernels or using class data or class member functions, the answer is “yes”.

Have a look at the NVIDIA CUDA C Programming Guide, Appendix D, C/C++ Language Support.

When defining the class, constructors and member function definitions should carry the __device__ qualifier. Obviously, the data the kernel touches should reside on the device. Remember that, to pass a class object to a __global__ function by value, the class must have a copy constructor that works on the device (the implicitly generated one is fine for simple classes).
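A minimal sketch of such a class (the names here are invented for illustration, not taken from the guide):

```cuda
#include <cuda_runtime.h>

// Member functions used on the device carry the __device__ qualifier;
// __host__ __device__ lets the constructor run on both sides.
class Complex {
public:
    __host__ __device__ Complex(float re, float im) : re_(re), im_(im) {}
    // The implicit copy constructor suffices here; a class owning
    // resources would need one that is valid on the device.
    __device__ float norm2() const { return re_ * re_ + im_ * im_; }
private:
    float re_, im_;
};

// The object is passed to the kernel by value, so it is copied
// from host to device as part of the kernel arguments.
__global__ void normKernel(Complex c, float* out) {
    *out = c.norm2();
}
```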