How to force CUDA to use memory freed with glDeleteTextures (or: the incomprehensible title)

Hi all,

I am using CUDA with OpenGL.

I allocate a big 3D texture with OpenGL. When I delete it with “glDeleteTextures”, the memory is not really freed; OpenGL just marks it as “not used” and can re-allocate it later when needed (for a new texture, for example).

My problem is that I’d like to use this memory for a cudaArray, not for an OpenGL object. And while OpenGL knows that this memory is usable, CUDA doesn’t seem to, and returns “out of memory”.
A call to cudaMemGetInfo before and after glDeleteTextures reports the same amount of free memory (so CUDA really does not see the memory as freed).
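For anyone who wants to reproduce the observation, here is a minimal sketch, assuming a current OpenGL context, a platform where glTexImage3D is directly available (otherwise it must be loaded as an extension), and a 512³ RGBA8 texture as a stand-in for the “big 3D texture”:

```cpp
#include <cstdio>
#include <GL/gl.h>        // glTexImage3D / GL_TEXTURE_3D may need glext.h or a loader on some platforms
#include <cuda_runtime.h>

// Print how much device memory CUDA currently considers free.
static void printFreeMem(const char* label)
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    std::printf("%s: %zu MiB free of %zu MiB\n",
                label, freeBytes >> 20, totalBytes >> 20);
}

void reproduce()   // requires a current OpenGL context
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    // 512^3 RGBA8 is roughly 512 MiB of device memory
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 512, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glFinish();    // try to get the driver to commit the allocation now
    printFreeMem("after glTexImage3D");

    glDeleteTextures(1, &tex);
    glFinish();
    printFreeMem("after glDeleteTextures");   // reportedly unchanged
}
```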

Do you have an idea how I can force CUDA to use this memory (or how to force OpenGL to really free it)?

Thanks,

– pium

A brute-force method would be destroying and recreating the OpenGL context. (Maybe use a special context just for the big texture, shared with the main context. Not even sure that works, though.)

Good idea, it could work!
Even if it’s horrible…

Unfortunately, I can’t delete the OpenGL context containing the texture, because it is shared with several other contexts. Once it is deleted, I am not able to recreate a context containing the texture and share it with the other existing contexts “live”; I can only share a context during the construction stage. (I am using QGLWidget to share contexts, and QGLWidget only takes a share widget in its constructor.)
If you have an idea on how to do it, I would be glad to hear it!
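For reference, here is a minimal sketch of what the throwaway-context idea looks like with QGLWidget (assuming the Qt 4 QtOpenGL API, where the share widget can only be passed to the constructor); it also shows exactly where the approach breaks down when sharing has to be re-established at runtime:

```cpp
#include <QApplication>
#include <QGLWidget>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QGLWidget mainWidget;   // main rendering context

    // Throwaway context that shares objects with the main one.
    // Sharing can only be requested here, in the constructor;
    // QGLWidget offers no way to join an existing share group later.
    QGLWidget* textureWidget = new QGLWidget(nullptr, &mainWidget);

    textureWidget->makeCurrent();
    // ... create the big 3D texture here (glGenTextures / glTexImage3D) ...
    textureWidget->doneCurrent();

    // Later: destroy the throwaway context and hope the driver returns
    // the texture's memory. With several contexts already sharing the
    // texture, this is exactly the step that fails in the setup above.
    delete textureWidget;

    mainWidget.show();
    return app.exec();
}
```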

Or if you have an even better idea, I’d be glad to hear that too; I am completely blocked and frustrated, since I theoretically have enough memory but can’t use it.

Thanks

– pium