Creating a linked list inside the GPU

Is it possible to create a linked list in GPU memory?
I know it is possible to create the linked list on the CPU and copy it to the GPU,
but is it possible to build the linked list on the GPU itself, i.e. add nodes, delete nodes, and allocate memory dynamically?

That's probably the last thing you want to do on a GPU… With dynamic memory allocation in the latest Fermi architecture, coupled with atomics, you should be able to do it, though…
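To make that concrete, here is a minimal sketch of what the suggestion above could look like, assuming a Fermi-class (sm_20+) device where in-kernel `malloc` is available. The `Node` struct and `push_nodes` kernel are hypothetical names, not from the thread: each thread allocates its own node on the device heap and pushes it onto a single shared list head with a lock-free compare-and-swap loop.

```cuda
#include <cstdio>

struct Node {
    int   value;
    Node *next;
};

// Each thread allocates a node with device-side malloc (Fermi / sm_20+)
// and pushes it onto one shared list head via a lock-free CAS loop.
__global__ void push_nodes(Node **head)
{
    Node *n = (Node *)malloc(sizeof(Node));   // allocated on the device heap
    if (n == NULL) return;                    // device heap exhausted
    n->value = blockIdx.x * blockDim.x + threadIdx.x;

    Node *assumed, *old = *head;
    do {
        assumed = old;
        n->next = assumed;                    // link behind the current head
        old = (Node *)atomicCAS((unsigned long long *)head,
                                (unsigned long long)assumed,
                                (unsigned long long)n);
    } while (old != assumed);                 // retry if another thread won
}
```

Note that the device heap defaults to a few MB; you would enlarge it with `cudaDeviceSetLimit(cudaLimitMallocHeapSize, …)` before the launch, and memory allocated with in-kernel `malloc` must be released with in-kernel `free`, not `cudaFree`.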

In that case, suppose I have a worklist-based algorithm in which each thread block generates a set of new values to work on - the number of values generated isn't fixed, but the maximum equals the number of threads. This continues until the worklist is empty. Situations like this could hog a lot of global memory if I make the most conservative guess when allocating the worklist - wouldn't they? Dynamic memory allocation is what you would generally do in CPU code. What's the smart alternative on a GPU?
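One common alternative to per-block worst-case preallocation is a single shared output buffer with an atomic cursor: each thread reserves exactly the slots it needs with `atomicAdd`, so the output ends up densely packed. A hedged sketch (the kernel name, the `produced` condition, and the "new item" formula are all placeholders for illustration):

```cuda
// Append-style worklist output: one global buffer, one atomic counter.
// Threads that produce work reserve a slot with atomicAdd, so no
// per-block worst-case padding is wasted.
__global__ void generate(const int *in, unsigned int n_in,
                         int *out, unsigned int *out_count,
                         unsigned int out_capacity)
{
    unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n_in) return;

    int produced = in[tid] % 2;            // placeholder: emit 0 or 1 items
    if (produced) {
        unsigned int slot = atomicAdd(out_count, 1u);
        if (slot < out_capacity)           // guard against buffer overflow
            out[slot] = in[tid] + 1;       // placeholder new work item
    }
}
```

If contention on the single counter becomes a bottleneck, a common refinement is to aggregate within each block first (shared-memory counter or a prefix sum over the block) and do one `atomicAdd` per block rather than one per thread.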

Also, what's a smart way to implement worklist-type algorithms on the GPU - where one run of the kernel generates more work? I am looking for something other than re-running the kernel on the new work, since that would require taking all the worklist data out to the CPU first and then re-injecting it into the GPU on the next kernel call. (Am I correct here?)
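For what it's worth, re-launching the kernel does not actually require moving the worklist through the CPU: device buffers persist across kernel launches, so the host loop only needs to read back the element count to decide whether to launch again. A self-contained sketch under that assumption (the `step` kernel and its "emit value-1 while positive" rule are made up for illustration):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical step: each work item with value > 1 emits one new item
// (value - 1) into the output worklist; items that reach 1 retire.
__global__ void step(const int *in, unsigned int n_in,
                     int *out, unsigned int *out_count)
{
    unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n_in) return;
    int v = in[tid] - 1;
    if (v > 0)
        out[atomicAdd(out_count, 1u)] = v;   // append new work on-device
}

int main()
{
    const unsigned int capacity = 1024;
    int h_init[4] = {3, 1, 5, 2};
    unsigned int h_count = 4, *d_count;
    int *d_a, *d_b;

    cudaMalloc(&d_a, capacity * sizeof(int));
    cudaMalloc(&d_b, capacity * sizeof(int));
    cudaMalloc(&d_count, sizeof(unsigned int));
    cudaMemcpy(d_a, h_init, sizeof(h_init), cudaMemcpyHostToDevice);

    // The worklist ping-pongs between d_a and d_b entirely in GPU memory;
    // only the 4-byte count crosses back to the host each round.
    while (h_count > 0) {
        cudaMemset(d_count, 0, sizeof(unsigned int));
        unsigned int blocks = (h_count + 255) / 256;
        step<<<blocks, 256>>>(d_a, h_count, d_b, d_count);
        cudaMemcpy(&h_count, d_count, sizeof(unsigned int),
                   cudaMemcpyDeviceToHost);
        int *tmp = d_a; d_a = d_b; d_b = tmp;   // swap input/output roles
    }
    printf("worklist drained\n");
    return 0;
}
```

The fully on-GPU alternative is a persistent-kernel design, where thread blocks loop and pull work from a global queue with atomics instead of returning to the host at all, but the relaunch loop above is simpler and usually good enough when each generation has plenty of parallelism.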

Thanks

Sid