question about memory allocation

Hello!

I have a small problem that I just can’t solve, and I can’t google a solution either.

There are a few structures I have declared on the host:

struct a
{
    int a1;
    int a2;
};

struct b
{
    int b1;
    a* b2;
    b* prev;
};

struct d
{
    int* d1;
    b* d2;
};

d* host;

And now I want to allocate these structures on device memory:

CUDA_SAFE_CALL(cudaMalloc((void**)&host, sizeof(d)));
CUDA_SAFE_CALL(cudaMalloc((void**)&host->d1, sizeof(int) * 10));

The second call doesn’t seem to work. So what is the appropriate way of allocating these structures in device memory?

And how can I dynamically allocate memory inside functions with the __global__ and __device__ qualifiers?

Thank you in advance.

Right now, the answers seem to be “don’t use linked lists, use arrays instead”.

You can’t dereference a GPU pointer on the CPU, which is what your second cudaMalloc call does: host->d1 lives in device memory that the CPU cannot access. There is no good way of doing this without an allocation call on the video card itself…
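For the struct d from the question, the usual pattern is to build the struct on the host with device pointers in its fields and then copy the whole struct over in one cudaMemcpy. A minimal sketch of that idea (the names dev_d1, tmp and dev_struct are mine, and error checking is omitted):

// Allocate the inner int array on the device first.
int* dev_d1;
cudaMalloc((void**)&dev_d1, sizeof(int) * 10);

// Fill in a host-side staging copy of the struct, storing the *device* pointer in it.
d tmp;
tmp.d1 = dev_d1;
tmp.d2 = NULL;   // a b* field would be allocated and wired up the same way

// Allocate the struct itself on the device and copy the staging copy over.
d* dev_struct;
cudaMalloc((void**)&dev_struct, sizeof(d));
cudaMemcpy(dev_struct, &tmp, sizeof(d), cudaMemcpyHostToDevice);

Kernels can then dereference dev_struct->d1 safely, because both the struct and the array it points to are in device memory.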

You can allocate device memory from the host side with cudaMalloc. There is not currently a way to allocate memory inside a __global__ or __device__ function without writing your own malloc that hands out pieces of a buffer you already allocated with cudaMalloc, using it as a source pool.
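To illustrate the source-pool idea, here is a rough bump-allocator sketch. None of this comes from CUDA itself; my_malloc, pool_base, pool_offset and POOL_SIZE are names I made up, and it never frees or checks for pool overflow:

#define POOL_SIZE (16 * 1024 * 1024)

__device__ char*        pool_base;    // start of the cudaMalloc'd buffer
__device__ unsigned int pool_offset;  // next free byte in the pool

__device__ void* my_malloc(unsigned int bytes)
{
    unsigned int size = (bytes + 7) & ~7u;             // keep allocations 8-byte aligned
    unsigned int old  = atomicAdd(&pool_offset, size);  // claim a slice atomically
    return (void*)(pool_base + old);
}

// Host-side setup (error checking omitted):
void init_pool()
{
    char* buf;
    cudaMalloc((void**)&buf, POOL_SIZE);
    cudaMemcpyToSymbol(pool_base, &buf, sizeof(char*));
    unsigned int zero = 0;
    cudaMemcpyToSymbol(pool_offset, &zero, sizeof(unsigned int));
}

Because the offset is advanced with atomicAdd, many threads can call my_malloc concurrently; the price is that memory can only be reclaimed by resetting the whole pool.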

I’ve got an open thread of my own on this topic.