In my object-oriented design in C++, I basically have three classes for controlling the GPU:
1: GPUWorker (This one contains the other two classes below and coordinates them)
2: GPUStreamController (This one abstracts a cudaStream via the stream API; one can send work to it, which will then be launched asynchronously)
3: GPUMemoryManager (Responsible for allocating, memcpy’ing and keeping track of data that is shared amongst the StreamControllers)
GPUWorker uses the GPUMemoryManager to look up pointers/offsets to the correct data and then invokes an available
GPUStreamController.
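Roughly, the skeleton I have in mind looks like this (a simplified sketch; the method names are just illustrative):
[codebox]#include <cstddef>
#include <memory>
#include <string>
#include <vector>
#include <cuda_runtime.h>

// Keeps track of cudaMalloc'd buffers under a name/key
class GPUMemoryManager {
public:
    float* allocate(const std::string& key, std::size_t numElements);
    float* getPointer(const std::string& key) const;
};

// Wraps one cudaStream_t; work sent to it is launched asynchronously
class GPUStreamController {
public:
    GPUStreamController()  { cudaStreamCreate(&stream_); }
    ~GPUStreamController() { cudaStreamDestroy(stream_); }
    void enqueueWork(float* deviceData, std::size_t numElements);
private:
    cudaStream_t stream_;
};

// Looks up data via the memory manager and dispatches it to a free stream controller
class GPUWorker {
public:
    void dispatch(const std::string& key, std::size_t numElements);
private:
    GPUMemoryManager memoryManager_;
    std::vector<std::unique_ptr<GPUStreamController>> streamControllers_;
};[/codebox]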
The problem: Ordinary device memory allocated with cudaMalloc can easily be managed by the MemoryManager and passed along as a pointer to
the StreamControllers. But what about constant memory?
The normal way to do it is to place a __constant__ array at file scope, use cudaMemcpyToSymbol, and then use the array directly in the kernel.
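So, something along these lines, with everything living in one .cu file (names like d_filter and FILTER_SIZE are just for illustration):
[codebox]#include <cuda_runtime.h>

#define FILTER_SIZE 64                     // illustrative size

__constant__ float d_filter[FILTER_SIZE];  // file-scope constant memory

// kernel reads the __constant__ symbol directly
__global__ void applyFilter(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= d_filter[i % FILTER_SIZE];
}

// host-side upload into the symbol
void uploadFilter(const float* h_filter)
{
    cudaMemcpyToSymbol(d_filter, h_filter, FILTER_SIZE * sizeof(float));
}[/codebox]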
But I had originally planned to keep the memory handling and the kernel invocation in separate classes (and thus separate compilation units).
Could I use a C++ reference to “pass the name” of the constant memory from the GPUMemoryManager to the GPUStreamController class, something like:
[codebox]class GPUMemoryManager {
public:
    // pseudocode: hand out a reference to the __constant__ array somehow
    __constant__ float& getConstantArrayReference();
};[/codebox]
and then use this reference in the GPUStreamController?
(My gut tells me that all this isn’t possible, and that I’ll have to resort to a global header file for the constant memory)
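If it comes to that, I suppose the header-based fallback would look roughly like this (just a sketch; I'm assuming relocatable device code, i.e. compiling with nvcc -rdc=true plus device linking, so the __constant__ symbol can be shared between translation units; file and symbol names are made up):
[codebox]// GPUConstants.h -- the shared "global header"
#pragma once
#define FILTER_SIZE 64
extern __constant__ float d_filter[FILTER_SIZE];

// GPUConstants.cu -- the single definition of the symbol
#include "GPUConstants.h"
__constant__ float d_filter[FILTER_SIZE];

// GPUMemoryManager.cu -- uploads into the shared symbol
#include <cuda_runtime.h>
#include "GPUConstants.h"
void uploadFilter(const float* h_filter)
{
    cudaMemcpyToSymbol(d_filter, h_filter, FILTER_SIZE * sizeof(float));
}

// GPUStreamController.cu -- kernel in another translation unit reads the same symbol
#include "GPUConstants.h"
__global__ void applyFilter(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= d_filter[i % FILTER_SIZE];
}[/codebox]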
Kind regards, Andreas