Hi guys,

I’m having some problems trying to figure out how to do this (or if it is possible):

At the moment I’m working on a molecular dynamics project that uses a 10 * C array, where C is the number of boxes in a 3D grid and 10 is the maximum number of molecules per box. The problem arises when I try to allocate memory for this array, because it can become huge. If I decrease C, the maximum number of molecules per box increases, so the total array size stays almost the same.

I think I can work around it like this. Instead of having:

int* array = new int [10*C];

I can use:

int** array = new int* [C];

array[0] = new int [N1]; // N1 < 10

array[1] = new int [N2]; // N2 < 10

array[3] = new int [N3]; // N3 < 10

array[7] = new int [N4]; // N4 < 10

array[8] = new int [N5]; // N5 < 10

...

array[C-1] = new int [Ni]; // Ni < 10

where (N1 + N2 + N3 + N4 + N5 + … + Ni) == N, the total number of molecules

By doing this I only allocate the memory I’m actually going to use instead of the maximum I might use. And since the number of molecules N in the system is fixed, the total size of this structure is bounded by C pointers plus N ints.
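To make that C + N bound concrete: the same jagged structure can also be packed into one flat array of length N plus an offset table of length C + 1 (prefix sums of the Ni), so box i’s molecules live in data[offset[i]] .. data[offset[i+1]-1]. This is just a sketch with made-up names (JaggedGrid, makeGrid):

```cpp
#include <vector>
#include <numeric>

// Sketch: all molecules in one contiguous array of length N,
// plus an offset table of length C+1 (prefix sums of the Ni).
struct JaggedGrid {
    std::vector<int> offset;  // size C+1; offset[C] == N
    std::vector<int> data;    // size N (total molecules)
};

JaggedGrid makeGrid(const std::vector<int>& counts) {  // counts[i] = Ni
    JaggedGrid g;
    g.offset.assign(counts.size() + 1, 0);
    std::partial_sum(counts.begin(), counts.end(), g.offset.begin() + 1);
    g.data.resize(g.offset.back());
    return g;
}
```

Being one contiguous block, this would also only need a single allocation for the data plus one for the offsets, with no per-box pointers.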

Now, the only problem is that I don’t know how to allocate this kind of array in device memory. I can’t use cudaMallocPitch because it allocates a fixed width * height block, which is exactly the problem I’m trying to avoid.
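One pattern I’ve seen suggested is to cudaMalloc each row separately, collect the device pointers in a host array, and then copy that pointer table into a device-side int**. This is an untested sketch (names like h_rowPtrs / d_rowPtrs are just mine), and I’m not sure it’s the best approach:

```cuda
#include <cuda_runtime.h>
#include <vector>

// Untested sketch: jagged array on the device.  rowLen[i] = Ni.
int** makeDeviceJagged(const std::vector<int>& rowLen)
{
    const int C = static_cast<int>(rowLen.size());
    std::vector<int*> h_rowPtrs(C, nullptr);

    // one device allocation per box, sized to what it actually holds
    for (int i = 0; i < C; ++i)
        if (rowLen[i] > 0)
            cudaMalloc(&h_rowPtrs[i], rowLen[i] * sizeof(int));

    // copy the table of row pointers itself to the device
    int** d_rowPtrs = nullptr;
    cudaMalloc(&d_rowPtrs, C * sizeof(int*));
    cudaMemcpy(d_rowPtrs, h_rowPtrs.data(), C * sizeof(int*),
               cudaMemcpyHostToDevice);

    return d_rowPtrs;   // kernels could then index d_rowPtrs[i][j]
}
```

The downside I can see is C separate cudaMalloc calls plus an extra pointer indirection on every kernel access, so I don’t know how well this would scale.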

Any advice?

Thanks guys for your time and help