Hi,
I am running into a problem when trying to reuse, inside kernels, some functions I implemented in the past. For instance, imagine I have the following struct Integer:
[codebox]struct Integer
{
    int m_int;
};[/codebox]
and the following function, which squares an Integer object:
[codebox]void squareInteger(Integer& integer)
{
    integer.m_int *= integer.m_int;
}[/codebox]
Now I want to reuse this code in kernels (imagine that the real code is a very large library with many classes/structs and functions). So I want to implement a kernel that squares each Integer object in an array:
[codebox]__global__ void squareIntegerArray(Integer* array, unsigned int size)
{
    unsigned int i = threadIdx.x;
    if (i < size) squareInteger(array[i]);
}[/codebox]
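For reference, this is roughly how I launch it on the host side (just a minimal test; h_array and size are my own host-side variables):
[codebox]// Minimal host-side test: copy the array to the device, run one thread
// per element in a single block, and copy the results back.
Integer* d_array;
cudaMalloc((void**)&d_array, size * sizeof(Integer));
cudaMemcpy(d_array, h_array, size * sizeof(Integer), cudaMemcpyHostToDevice);

squareIntegerArray<<<1, size>>>(d_array, size);

cudaMemcpy(h_array, d_array, size * sizeof(Integer), cudaMemcpyDeviceToHost);
cudaFree(d_array);[/codebox]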
The problem is that the function squareInteger called from the kernel squareIntegerArray is not declared as __device__, so it cannot be called from device code. Therefore, I cannot reuse my previously implemented functions inside kernels.
-
I would like to know how I can overcome this problem.
-
Is there any way to wrap squareInteger in a __device__ function (say, squareInteger_device) so that I can call it from the kernel?
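To make the question concrete, this is the kind of wrapper I have in mind (only a sketch; the duplicated body is exactly what I would like to avoid):
[codebox]// Sketch of the wrapper idea: a __device__ copy of the host function.
// It works, but it duplicates the implementation of squareInteger.
__device__ void squareInteger_device(Integer& integer)
{
    integer.m_int *= integer.m_int;
}

__global__ void squareIntegerArray(Integer* array, unsigned int size)
{
    unsigned int i = threadIdx.x;
    if (i < size) squareInteger_device(array[i]);
}[/codebox]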
-
How can existing C/C++ libraries be reused in CUDA code, given that their functions must be declared __device__ in order to run inside a kernel?
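For example, I was wondering whether annotating the library functions along these lines is the expected approach (just a guess on my part; MYLIB_HOST_DEVICE is a macro name I made up):
[codebox]// Guess: mark library functions as both host and device code when the
// header is compiled by nvcc (which defines __CUDACC__), and as plain
// host code otherwise.
#ifdef __CUDACC__
#define MYLIB_HOST_DEVICE __host__ __device__
#else
#define MYLIB_HOST_DEVICE
#endif

MYLIB_HOST_DEVICE void squareInteger(Integer& integer)
{
    integer.m_int *= integer.m_int;
}[/codebox]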