I’m looking for some advice on how to structure a separately compiled C++ and CUDA C program. At the moment I have a C++ class that uses various libraries (Boost, etc.). The same class also holds pointer-to-pointer member variables that I use to keep track of the memory belonging to that class which lives on the GPU.
#include <cuda_runtime.h>

class Example
{
public:
    float **a1, **a2;   // each points at a float* that will hold a device address
};

void func_on_gpu(Example e)
{
    // e is passed by value, so member access uses '.', and cudaMalloc expects a void**
    cudaMalloc((void**)e.a1, sizeof(float) * 1000);
}
Is there any standard way of structuring a program with separate compilation? Should I have two separate classes, one in CUDA C that keeps track of the device memory and a host class that acts as a sort of wrapper over the GPU class? It seems like there should be a better way than keeping track of all these pointers to pointers. At the moment the CUDA C functions are not attached to any class.
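To make the wrapper idea concrete, here is roughly what I'm picturing (all file and function names below are just placeholders I made up, and error checking is omitted): the host class never includes CUDA headers, it only calls plain functions declared in a shared header, and the CUDA parts are compiled separately by nvcc.

// gpu_buffer.h -- seen by both g++ and nvcc
#ifndef GPU_BUFFER_H
#define GPU_BUFFER_H
#include <cstddef>

float* gpu_alloc_floats(std::size_t count);   // returns a device pointer
void   gpu_free_floats(float* dev_ptr);

#endif

// gpu_buffer.cu -- compiled by nvcc, the only file that touches the CUDA runtime
#include "gpu_buffer.h"
#include <cuda_runtime.h>

float* gpu_alloc_floats(std::size_t count)
{
    float* dev_ptr = 0;
    cudaMalloc((void**)&dev_ptr, count * sizeof(float));
    return dev_ptr;
}

void gpu_free_floats(float* dev_ptr)
{
    cudaFree(dev_ptr);
}

// host_class.cpp -- compiled by g++, free to use Boost etc.
#include "gpu_buffer.h"

class Example
{
public:
    Example()  { d_a1 = gpu_alloc_floats(1000); }
    ~Example() { gpu_free_floats(d_a1); }
private:
    float* d_a1;   // a plain device pointer instead of a float**
};

That removes the pointers to pointers, but it still feels like two halves glued together rather than one class.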
Ideally I would want a way to have both C++ functions that use common C++ libraries and CUDA C functions under the same class.
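For that single-class version, my rough idea (again, names are placeholders and this is only a sketch) is to declare the class once in a header that contains no CUDA-specific syntax, implement the CUDA-dependent member functions in a .cu file, implement the ordinary C++/Boost member functions in a .cpp file, and let the linker put them together:

// example.h
#include <cstddef>

class Example
{
public:
    void allocate_on_gpu(std::size_t n);   // implemented in example.cu
    void analyse_on_host();                // implemented in example.cpp, can use Boost
private:
    float* d_data;                          // device pointer
    std::size_t size;
};

// example.cu -- compiled by nvcc
#include "example.h"
#include <cuda_runtime.h>

void Example::allocate_on_gpu(std::size_t n)
{
    size = n;
    cudaMalloc((void**)&d_data, n * sizeof(float));
}

# build: compile the CUDA part with nvcc, the rest with g++, link against the CUDA runtime
# (library path depends on the install)
nvcc -c example.cu -o example_cuda.o
g++  -c example.cpp main.cpp
g++  example_cuda.o example.o main.o -o app -L/usr/local/cuda/lib64 -lcudart

Is this the standard approach, or is there a cleaner pattern?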
A basic explanation and some examples would be great.