Variable array size within kernel?

Hi guys,

I have some code that needs to do the following:

Data: a 3-dimensional dataset

data[x-coordinate, y-coordinate, energy]

for each time step
    for x = 0 to x_max
        for y = 0 to y_max
            a = data[x, y, *]   // build an array/vector from every energy value at this pixel

My plan is to parallelize over pixels:
at time = 0, launch x_max*y_max threads (one per pixel) to run the kernel below.

__global__ void theKernel(int Emax)
{
    float a[Emax];   // <-- this is the line the compiler rejects

    for (int e = 0; e < Emax; e++)
    {
        a[e] = data[tx, ty, e];   // pseudocode: grab every energy value at this pixel
    }
}

The compiler gives me an error because obviously I can't declare float a[Emax] when Emax isn't a compile-time constant. Is there a way I can dynamically allocate memory?

Thanks
Z

No replies?

Simply put: can I create variable-size arrays within a CUDA kernel?

All dynamic memory allocation has to be initiated from the host side. Global memory is allocated with cudaMalloc(), and dynamic arrays in shared memory are sized with a special parameter when you launch the kernel. (See the Programming Guide for more info on dynamic shared memory.)
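
For example, something along these lines using a scratch buffer in global memory (just a rough sketch; the kernel name, block size, and data layout are my own guesses, not from your code):

__global__ void processPixels(const float *data, float *scratch,
                              int x_max, int y_max, int Emax)
{
    int tx = blockIdx.x * blockDim.x + threadIdx.x;   // pixel x
    int ty = blockIdx.y * blockDim.y + threadIdx.y;   // pixel y
    if (tx >= x_max || ty >= y_max) return;

    // this thread's private slice of the pre-allocated scratch buffer
    float *a = scratch + (tx * y_max + ty) * Emax;

    for (int e = 0; e < Emax; e++)
        a[e] = data[(tx * y_max + ty) * Emax + e];    // assumes x, then y, then energy ordering
}

// Host side (error checking omitted):
//   float *d_data, *d_scratch;
//   size_t n = (size_t)x_max * y_max * Emax * sizeof(float);
//   cudaMalloc((void**)&d_data, n);
//   cudaMalloc((void**)&d_scratch, n);
//   dim3 block(16, 16);
//   dim3 grid((x_max + block.x - 1) / block.x, (y_max + block.y - 1) / block.y);
//   processPixels<<<grid, block>>>(d_data, d_scratch, x_max, y_max, Emax);

For the dynamic shared memory route, you would instead declare extern __shared__ float a[]; inside the kernel and pass the required byte count as the third parameter in the <<< >>> launch syntax, keeping in mind that shared memory per block is quite limited.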

Thanks seibert :)