On-card memory allocation

Hello all,

Just a quick question here. Has anyone written up an on-card OSS licensed Malloc/free routine for the GPU?

This library could be built using cudaMalloc to initially fill a pool of memory on the video card, then dole it out to threads/blocks on request. I may need to write a library like this in the near future, and I was wondering if anyone else has already gone down this path.

Also a question for Nvidia: Is an on-card memory allocation library forthcoming in a future release of CUDA? I don’t want to wind up duplicating work that is going to be released as part of CUDA in the future.

Thanks!

What is the purpose of re-implementing cudaMalloc?

I suspect that the current limitation where device pointers cannot be passed between threads would make it difficult to implement a library like that.

I actually mean allocation on the card, as called from within a device or global function. You can’t do that with cudaMalloc.

Do you have an idea of how you might write this? I am just curious how this might be accomplished without atomic operations or locks…

I don’t think it can be done in CUDA 1.0 without hard-partitioning the memory between multiprocessors. In 1.1 it should be possible with an atomic-operation-protected stack.

For 1.0, having hard partitions between multiprocessors is not that bad for the things we are working on as we can predict the max memory used by a block, just not a thread.

Eventually it will need atomic operations, but that is for the long-term.

My original posting was more about figuring out if this kind of thing is planned to be released with the next version of CUDA so I do not reinvent the wheel…

Hi,

I planned to write this some time ago, but for some reason never had the time to look into it. I still have some problems at hand that would benefit from such a library, so I am very interested (and will probably write one in the near future if no one else does).

If you are interested we may work together on this…