So... has anyone successfully written a CUDA-based allocator class?

I wrote some code in the past that would have benefited heavily from a more managed allocation strategy. It performed a lot of disparate (scattered) writes (disparate reads are forgivable, imo), and I think something like an allocator class could have helped alleviate that cost.
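To make the question concrete: by "allocator class" I have in mind something like the interface below. This is purely a hypothetical sketch of the shape such a class might take, not an existing library; in a real CUDA build `allocate`/`deallocate` would wrap `cudaMalloc`/`cudaFree`, but plain host memory stands in here so the sketch is self-contained:

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical interface for a device-memory allocator class.
// In a real CUDA build these calls would wrap cudaMalloc/cudaFree;
// std::malloc/std::free stand in so this compiles host-only.
class device_allocator {
public:
    virtual ~device_allocator() = default;
    virtual void* allocate(std::size_t bytes) = 0;
    virtual void deallocate(void* p) = 0;
};

// Trivial pass-through implementation; a smarter one could pool,
// batch, or reorder allocations to reduce the cost of scattered writes.
class passthrough_allocator : public device_allocator {
public:
    void* allocate(std::size_t bytes) override { return std::malloc(bytes); }
    void deallocate(void* p) override { std::free(p); }
};
```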

Has anything like this been written in CUDA yet? Is the idea preposterous? It sounds at least a little preposterous to me, but I was curious what other people thought.


Hey, thanks! I’m so glad there’s actually research being done!

Why not try your hand at writing your own custom allocator class, even if the idea seems "preposterous" to you at first glance? As G. B. Shaw famously stated: "The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."

In the coming months, I am planning to start work on a host-side CUDA memory-allocator library in modern C++.

It will be focused on my research into using GPUs in analytic DBMSes, i.e.:

  • Most memory taken up by a small number of very large areas
  • Areas intended for smaller data are allowed to fragment in exchange for better allocation speed (since they are all released together when a query's execution ends)
  • Mostly prospective allocation, i.e. "I will need X MB between time points t_1 and t_2", or perhaps even some kind of linkage to a task-execution graph detailing allocation lifetimes.
  • Since the time an allocation takes doesn't grow as the data scales up, performance will at first be a secondary consideration relative to API neatness and features. Of course, I would still rather make it super-fast to the extent I can.
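The first two points above can be sketched as a region ("arena") allocator: one large buffer acquired up front, O(1) bump-pointer sub-allocations, and a single bulk release when the query ends. This is only a minimal host-side sketch under my own assumptions, not the planned library; `std::malloc` stands in for `cudaMalloc` so it is self-contained:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical per-query arena: one large backing region, bump-pointer
// sub-allocation, and everything released together at query end.
// std::malloc/std::free stand in for cudaMalloc/cudaFree here.
class query_arena {
    char*       base_;
    std::size_t capacity_;
    std::size_t offset_ = 0;
public:
    explicit query_arena(std::size_t capacity)
        : base_(static_cast<char*>(std::malloc(capacity))), capacity_(capacity) {
        if (!base_) throw std::bad_alloc{};
    }
    ~query_arena() { std::free(base_); }  // single bulk release

    // O(1) sub-allocation: round up to the alignment, bump, return.
    void* allocate(std::size_t bytes, std::size_t align = 256) {
        std::size_t aligned = (offset_ + align - 1) / align * align;
        if (aligned + bytes > capacity_) throw std::bad_alloc{};
        offset_ = aligned + bytes;
        return base_ + aligned;
    }

    std::size_t used() const { return offset_; }
};
```

Note there is no per-allocation `deallocate`: internal fragmentation is tolerable precisely because the whole region's lifetime ends with the query.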

I'd put the chances of me actually going through with this work at, say, 70%. If you are interested, I would be very glad to collaborate, and of course to let the focus shift somewhat toward collaborators' personal/group interests, as long as it stays useful enough for me.