Data structures on the GPU (heap)

I’ve been trying to find information on how to efficiently maintain different kinds of heaps on the GPU (e.g. Fibonacci heaps, binary heaps, etc.). So far, a search on Google/Google Scholar has turned up nothing.

Does anyone have any idea?

I’m confused. Can you dynamically allocate DRAM in a CUDA kernel?
I thought you had to pre-allocate everything before kernel execution…
Let me know if I’m wrong.

With Fermi and later GPUs (compute capability 2.0+), you can call malloc() and free() in device code for dynamic allocation.
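For reference, here’s a minimal sketch of what device-side allocation looks like. The kernel name, buffer size, and error handling are my own illustration, not anything from a specific codebase:

```cuda
#include <cstdio>

// Each thread allocates a small scratch buffer from the device heap,
// fills it, sums it into an output array, and frees it.
__global__ void device_malloc_demo(int n, int *out)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    // Device-side malloc draws from a separate device heap,
    // not from memory allocated with cudaMalloc on the host.
    int *buf = (int *)malloc(16 * sizeof(int));
    if (buf == NULL) {          // malloc returns NULL if the device heap is exhausted
        out[tid] = -1;
        return;
    }

    int sum = 0;
    for (int i = 0; i < 16; ++i) {
        buf[i] = tid + i;
        sum += buf[i];
    }
    out[tid] = sum;

    free(buf);                  // must be released with device-side free(), not cudaFree()
}
```

Note that memory obtained this way can only be freed by device-side free(); passing the pointer to cudaFree() on the host is invalid.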

Just one question :)

If every thread dynamically allocates memory, then with so many threads in flight, isn’t it going to exhaust memory very quickly?

Yeah, probably. I don’t know offhand what the default size of the device heap in CUDA is. It is very unusual to call malloc() and free() in kernel code, so most people never run into this issue.
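If I remember right, the default device heap is quite small (8 MB per the CUDA docs), and it can be queried and raised from the host with cudaDeviceGetLimit/cudaDeviceSetLimit before any kernel that uses device-side malloc() is launched. A quick host-side sketch (the 128 MB figure is just an example value):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    size_t heap_bytes = 0;

    // Query the current device heap size used by in-kernel malloc().
    cudaDeviceGetLimit(&heap_bytes, cudaLimitMallocHeapSize);
    printf("default device heap: %zu bytes\n", heap_bytes);

    // Raise the heap to 128 MB. This must happen before the first
    // kernel that calls malloc()/free() runs; afterwards it is fixed.
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128 * 1024 * 1024);

    cudaDeviceGetLimit(&heap_bytes, cudaLimitMallocHeapSize);
    printf("new device heap: %zu bytes\n", heap_bytes);
    return 0;
}
```

So if all your threads really do allocate, sizing this limit up front is the way to avoid malloc() returning NULL in the kernel.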