Newbie here: virtual functions in CUDA 4.0

Hi all,

Just a general question that would greatly help me understand how to deal with derived classes and their methods when porting existing C++ code to CUDA.
According to the programming guide, you're not allowed to pass an object with virtual functions as an argument to a __global__ kernel, which is quite limiting in my case.
Is there a way around this without creating the objects inside the kernel? And does anyone know whether that restriction is temporary, pending future compiler releases?
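
For reference, the pattern I'm trying to port looks roughly like this (heavily simplified, with made-up names, just to show the shape of the code):

```cpp
#include <cstdio>

// Existing host-side C++ I'd like to reuse on the device (simplified, hypothetical names).
struct Model {
    virtual float eval(float x) const = 0;   // everything goes through virtual calls like this
    virtual ~Model() {}
};

struct Quadratic : public Model {
    float a, b, c;
    Quadratic(float a_, float b_, float c_) : a(a_), b(b_), c(c_) {}
    float eval(float x) const { return a * x * x + b * x + c; }
};

// What I'd like to write, but the guide says objects with virtual functions
// can't be passed as kernel arguments:
// __global__ void run(Quadratic q, float* out) { out[threadIdx.x] = q.eval((float)threadIdx.x); }

int main() {
    Quadratic q(1.0f, 2.0f, 3.0f);
    printf("%f\n", q.eval(2.0f));   // host-side use is fine; the question is device-side use
    return 0;
}
```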

Much appreciation for any help!

In that thread a member suggests a way of doing it that seems to work. Is that the best way?

The reason it doesn't work is that C++ objects with virtual functions carry a pointer to a table of function pointers (the vtable) embedded in them. When you create such an object in a function that runs on the CPU, that pointer gets set to the CPU versions of those functions. These CPU versions cannot (and probably will never be able to) be executed on the GPU; they are x86/ARM/etc. code, not the GPU's ISA.

This is one of the reasons why programming heterogeneous systems is hard. At this point you don't have many options other than to create those objects on the GPU.

edit: You can pass objects with functions to a kernel; you just can't call those functions while on the GPU.
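
Roughly what I mean by creating the objects on the GPU is something like the sketch below. It's untested and the class names are made up; it assumes a compute capability 2.x (Fermi) card, since it relies on virtual __device__ functions and device-side new/delete:

```cpp
#include <cstdio>

// Compile with something like: nvcc -arch=sm_20 example.cu
struct Model {
    __device__ virtual float eval(float x) const = 0;
    __device__ virtual ~Model() {}
};

struct Quadratic : public Model {
    float a, b, c;
    __device__ Quadratic(float a_, float b_, float c_) : a(a_), b(b_), c(c_) {}
    __device__ float eval(float x) const { return a * x * x + b * x + c; }
};

// Construct the object in a kernel so its vtable pointer refers to device code.
__global__ void create(Model** obj) {
    if (threadIdx.x == 0 && blockIdx.x == 0)
        *obj = new Quadratic(1.0f, 2.0f, 3.0f);
}

__global__ void run(Model** obj, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = (*obj)->eval((float)i);   // virtual call works: the object was built on the GPU
}

__global__ void destroy(Model** obj) {
    if (threadIdx.x == 0 && blockIdx.x == 0)
        delete *obj;
}

int main() {
    const int n = 32;
    Model** d_obj = 0;
    float*  d_out = 0;
    cudaMalloc((void**)&d_obj, sizeof(Model*));
    cudaMalloc((void**)&d_out, n * sizeof(float));

    create<<<1, 1>>>(d_obj);
    run<<<1, n>>>(d_obj, d_out, n);
    destroy<<<1, 1>>>(d_obj);

    float h_out[n];
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("out[4] = %f\n", h_out[4]);   // 1*16 + 2*4 + 3 = 27

    cudaFree(d_obj);
    cudaFree(d_out);
    return 0;
}
```

The obvious downside is that every object needs extra kernels to build and destroy it, and the host can't touch it directly, but it keeps the virtual dispatch entirely on the device.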

Thanks, Gregory.

Yes, I've been passing inherited objects to kernels and calling their functions without problems, but only if the functions are not virtual. If they are, it doesn't work, which makes sense given your explanation.
It just makes porting C++ code a little trickier, as in my case it's all virtual functions.
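
To make that concrete, this is roughly the kind of thing that works for me versus what breaks (simplified, with made-up names; not my actual code):

```cpp
struct ParamsBase {
    float scale;
    __host__ __device__ ParamsBase(float s) : scale(s) {}
    __host__ __device__ float apply(float x) const { return scale * x; }     // non-virtual: fine
    // __device__ virtual float applyV(float x) const { return scale * x; }  // make it virtual and
    //                                                                       // host-built objects break
};

struct ParamsDerived : public ParamsBase {
    float offset;
    __host__ __device__ ParamsDerived(float s, float o) : ParamsBase(s), offset(o) {}
    __host__ __device__ float apply(float x) const { return scale * x + offset; }
};

// A derived object built on the host can be passed by value and its
// non-virtual methods called on the device without any trouble.
__global__ void kernel(ParamsDerived p, float* out) {
    out[threadIdx.x] = p.apply((float)threadIdx.x);
}

int main() {
    float* d_out = 0;
    cudaMalloc((void**)&d_out, 32 * sizeof(float));
    kernel<<<1, 32>>>(ParamsDerived(2.0f, 1.0f), d_out);   // object constructed on the host
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```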

If anyone has neat tricks for doing that, please do share.
Thanks again!