Sparse texture memory management

I’m working with an OpenGL engine that loads network content dynamically. Because I have no way of knowing in advance what the total texture memory load will be, I’ve been using sparse textures to reduce the footprint of existing textures when adding a new one would push me over my pre-defined texture memory budget. When I load a new texture I compute the total consumed texture memory, and if I’m over budget I find the texture with the largest number of active mips, un-commit its highest-resolution mip level (the current base level), and then set that texture’s GL_TEXTURE_BASE_LEVEL to 1 higher than its previous value.
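For reference, the eviction policy I just described looks roughly like this. This is only a sketch of the bookkeeping side — the struct and function names are hypothetical, and the actual GL calls (which need a live context) are shown as comments:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical bookkeeping: each texture tracks its committed mip range
   and the committed byte size of each level. */
typedef struct {
    unsigned int gl_name;    /* GL texture object name */
    int base_level;          /* current GL_TEXTURE_BASE_LEVEL */
    int num_levels;          /* total levels in the mip chain */
    size_t level_bytes[16];  /* byte size of each mip level */
} SparseTex;

/* Sum the bytes of the currently active (committed) mip range. */
static size_t committed_bytes(const SparseTex *t) {
    size_t total = 0;
    for (int i = t->base_level; i < t->num_levels; ++i)
        total += t->level_bytes[i];
    return total;
}

/* Pick the texture with the most active mips and drop its base level.
   The GL side of this step would be roughly:
     glTexturePageCommitmentEXT(tex, level, 0, 0, 0, w, h, 1, GL_FALSE);
     glTextureParameteriEXT(tex, GL_TEXTURE_2D,
                            GL_TEXTURE_BASE_LEVEL, level + 1);
   Returns the victim, or NULL if there was nothing to evict. */
static SparseTex *evict_one_level(SparseTex *texs, int count) {
    SparseTex *victim = NULL;
    int most = 0;
    for (int i = 0; i < count; ++i) {
        int active = texs[i].num_levels - texs[i].base_level;
        if (active > most) { most = active; victim = &texs[i]; }
    }
    if (victim && most > 1)  /* keep at least one level resident */
        victim->base_level += 1;
    return victim;
}
```

The loop runs once per over-budget check; in practice you’d repeat it until `committed_bytes` summed over all textures falls back under the budget.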

When I first started working with this technique, it seemed to behave the way I expected. Lately, however, my tests are showing that it isn’t working as expected. In particular, if I load up a stress-test scene on a GeForce 970, I see texture memory get rapidly consumed until it starts impacting application performance. Despite the fact that I’m calling glTexturePageCommitmentEXT to de-commit mip levels, external tools don’t report the consumed GPU memory dropping by any significant amount.

I’ve created a small test app here which reproduces the problem. It allocates 100 copies of a single texture, and then proceeds to strip off all the mip levels below GL_NUM_SPARSE_LEVELS_ARB, so the result should be 100 textures that each consume only 1 page. However, dedicated GPU memory still reports around 2.6 GB of usage, and doesn’t subsequently drop.
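To sanity-check what the driver *should* be reporting, a small helper like this computes the raw storage for any mip range, so you can compare the full chain against what ought to remain resident after stripping. The square-RGBA8 assumption is mine for illustration — the test app’s actual dimensions and format may differ:

```c
#include <assert.h>
#include <stddef.h>

/* Tight byte count for mip levels [first, last) of a square RGBA8
   texture: no page rounding, just raw texel storage.  Assumes 4 bytes
   per texel (RGBA8); adjust for other formats. */
static size_t mip_range_bytes(size_t base_dim, int first, int last) {
    size_t total = 0;
    for (int level = first; level < last; ++level) {
        size_t d = base_dim >> level;
        if (d == 0) d = 1;       /* mip dimensions clamp at 1 texel */
        total += d * d * 4;
    }
    return total;
}
```

For example, a full 2048² RGBA8 chain (12 levels) is `mip_range_bytes(2048, 0, 12)` ≈ 22.4 MB per texture, so 100 fully committed copies would sit in the low gigabytes — roughly the ballpark the tools report — whereas 100 stripped textures at one 64 KB page each should be a few megabytes.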

Am I misunderstanding how sparse allocation is supposed to work?

So I’ve discovered that the system-level reported GPU memory isn’t a reliable metric for determining how much committed memory is actually in use. If I follow up my previous work of allocating, committing and then de-committing memory with a batch of new texture allocations, the reported GPU memory does NOT increase, indicating that the new textures are re-using the memory freed by de-committing mip levels from the earlier textures.

So that’s good.

However, I’m noticing that the memory consumed by N textures using sparse allocation seems to be about 20% higher than for the same N textures allocated non-sparse. This is somewhat disappointing: I gain the flexibility of allocating and de-allocating textures, but at the cost of lower quality, since I can’t fit as many mip levels in memory as before.