I’m working with an OpenGL engine that loads network content dynamically. Because I have no way of knowing in advance what the ultimate texture memory load will be, I’ve been attempting to use sparse textures to reduce the memory footprint of already-loaded textures when adding new ones pushes me over my pre-defined texture memory budget. Whenever I load a new texture I compute the total consumed texture memory, and if I’m over budget I find the texture with the largest number of resident mips, un-commit its highest-resolution mip level, and then set GL_TEXTURE_BASE_LEVEL for that texture to 1 higher than its previous value (a sketch of that eviction step is below).
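For reference, here is a minimal sketch of that eviction step, assuming a GL 4.5 context with ARB_sparse_texture and direct state access; the Tracked struct is a hypothetical stand-in for however the engine tracks its textures, and the budget loop that decides which texture to shrink is omitted.

```cpp
#include <GL/glew.h>

struct Tracked {
    GLuint name;       // GL texture object
    GLint  width;      // level-0 width
    GLint  height;     // level-0 height
    GLint  baseLevel;  // current GL_TEXTURE_BASE_LEVEL
};

// De-commit the highest-resolution resident mip of one texture and bump its
// base level so sampling never touches the now-uncommitted pages.
void evictTopMip(Tracked& tex) {
    const GLint level = tex.baseLevel;
    // De-committing the whole level is valid because the region reaches the
    // level's edges, so page-size alignment is not an issue here.
    glTexturePageCommitmentEXT(tex.name, level, 0, 0, 0,
                               tex.width >> level, tex.height >> level, 1,
                               GL_FALSE);
    ++tex.baseLevel;
    glTextureParameteri(tex.name, GL_TEXTURE_BASE_LEVEL, tex.baseLevel);
}
```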
When I first started working with this technique it seemed to behave the way I expected. Lately, however, my tests show that it isn’t working as expected. In particular, if I load a stress-test scene on a GeForce 970, texture memory gets consumed rapidly until it starts impacting application performance. Despite the fact that I’m calling glTexturePageCommitmentEXT to de-commit mip levels, external tools don’t report the consumed GPU memory dropping by any significant amount.
I’ve created a small test app here which reproduces the problem (a sketch of the equivalent logic follows). It allocates 100 copies of a single texture, then strips off (de-commits) all the mip levels below GL_NUM_SPARSE_LEVELS_ARB, so the result should be 100 textures that each consume only one page. However, GPU dedicated memory still reports over 2.6 GB of usage and doesn’t subsequently drop.
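This is roughly what the test app does; treat it as a sketch under assumptions rather than my exact code (I’m assuming a 4096×4096 RGBA8 texture, a GL 4.5 context with ARB_sparse_texture, and GLEW already initialized; uploads are omitted).

```cpp
#include <GL/glew.h>
#include <vector>

constexpr int kCopies    = 100;
constexpr int kTexSize   = 4096;
constexpr int kMipLevels = 13;   // log2(4096) + 1

std::vector<GLuint> createSparseTextures() {
    std::vector<GLuint> textures(kCopies);
    glCreateTextures(GL_TEXTURE_2D, kCopies, textures.data());

    for (GLuint tex : textures) {
        // Sparse residency must be requested before allocating immutable storage.
        glTextureParameteri(tex, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
        glTextureStorage2D(tex, kMipLevels, GL_RGBA8, kTexSize, kTexSize);

        // Commit every level that supports page commitment; texel uploads
        // would follow here in the real app.
        GLint numSparseLevels = 0;
        glGetTextureParameteriv(tex, GL_NUM_SPARSE_LEVELS_ARB, &numSparseLevels);
        for (GLint level = 0; level < numSparseLevels; ++level) {
            glTexturePageCommitmentEXT(tex, level, 0, 0, 0,
                                       kTexSize >> level, kTexSize >> level, 1,
                                       GL_TRUE);
        }
    }
    return textures;
}

void stripSparseLevels(const std::vector<GLuint>& textures) {
    for (GLuint tex : textures) {
        GLint numSparseLevels = 0;
        glGetTextureParameteriv(tex, GL_NUM_SPARSE_LEVELS_ARB, &numSparseLevels);

        // De-commit every level below GL_NUM_SPARSE_LEVELS_ARB; only the
        // always-committed mip tail should remain resident afterwards.
        for (GLint level = 0; level < numSparseLevels; ++level) {
            glTexturePageCommitmentEXT(tex, level, 0, 0, 0,
                                       kTexSize >> level, kTexSize >> level, 1,
                                       GL_FALSE);
        }
        // Point sampling at the still-committed mip tail, mirroring the
        // engine's base-level bump.
        glTextureParameteri(tex, GL_TEXTURE_BASE_LEVEL, numSparseLevels);
    }
}
```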
Am I misunderstanding how sparse allocation is supposed to work?