I’m implementing a small software stack to run on top of the CUDA driver API (not the runtime), and I can’t find a memory management function that does what I want. Basically, I would like a function that returns two arrays containing the base addresses and respective sizes of all currently allocated memory. The closest existing function is cuMemGetAddressRange(), but to get the functionality I want I would have to probe the address space pointer by pointer, querying each one to discover the allocation (if any) that contains it, which would probably be slow.
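To make the workaround concrete, here is a rough sketch of the kind of probing loop I mean. This is illustrative only: it assumes an initialized context, the start/end/stride values are made up (there's no portable way to know the real bounds, which is part of the problem), and it would need a CUDA-capable system to actually run.

```c
#include <cuda.h>
#include <stdio.h>

/* Probe the device address space at a fixed stride. For each probe,
 * cuMemGetAddressRange() either returns the base and size of the
 * allocation containing that address, or an error if the address is
 * not part of any allocation. */
void enumerate_allocations(CUdeviceptr start, CUdeviceptr end, size_t stride)
{
    CUdeviceptr probe = start;
    while (probe < end) {
        CUdeviceptr base;
        size_t size;
        if (cuMemGetAddressRange(&base, &size, probe) == CUDA_SUCCESS) {
            printf("allocation at %p, %zu bytes\n", (void *)base, size);
            probe = base + size;   /* skip past this allocation */
        } else {
            probe += stride;       /* unmapped; step forward blindly */
        }
    }
}
```

Even ignoring the cost, the result is only as good as the stride: any allocation smaller than the stride that sits entirely between two probes would be missed.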
Since cuMemAlloc() needs to know which regions are already in use before it can place a new block, I would guess this information is already maintained inside the driver and would just need to be exposed through a function call. The driver would also always have the most current view, whereas a snapshot built by iterating with cuMemGetAddressRange() could already be stale by the time the iteration finishes.
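For the sake of discussion, something shaped like the following is what I have in mind. To be clear, this function does not exist in CUDA; the name and parameters are entirely hypothetical.

```c
/* Hypothetical API sketch -- not part of CUDA. The driver would fill
 * caller-provided arrays with the base address and size of every live
 * allocation in the current context, and report how many it wrote. */
CUresult cuMemGetAllocations(CUdeviceptr  *bases,  /* out: base addresses    */
                             size_t       *sizes,  /* out: sizes in bytes    */
                             unsigned int *count); /* in: array capacity;
                                                      out: entries written   */
```

A two-call pattern (first call with NULL arrays to get the count, second call to fetch the data) would also work, and matches how other query APIs commonly behave.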
Is it possible for a function like this to be included in the next CUDA release?