(GPU memory fragmentation) Do cudaMemGetInfo and nvidia-smi / NVML report the total GPU memory utilization, including OpenGL, during CUDA-OpenGL interop?

Do cudaMemGetInfo and nvidia-smi / NVML report the total GPU memory utilization, including OpenGL allocations, when using CUDA-OpenGL interop?
Thanks!
I've run into a problem: the app crashes after many loops with an OpenGL "Out of memory" error, BUT cudaMemGetInfo and nvidia-smi both report that there is still enough available GPU memory…

I think the free and total memory values returned by cudaMemGetInfo should be accurate regardless of what the GPU is being used for.

I can’t explain the OpenGL error just based on what is mentioned here.
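
If it helps, here is a minimal sketch of logging what cudaMemGetInfo reports once per loop iteration, so you can see whether the free value is actually trending toward zero before the OpenGL error shows up (the helper name is just illustrative):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Log what the CUDA runtime currently reports as free/total device memory.
// Calling this once per loop iteration shows whether free memory is really
// shrinking before the OpenGL "Out of memory" error appears.
void logDeviceMemory(int iteration)
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return;
    }
    std::printf("iter %d: free %.1f MiB / total %.1f MiB\n",
                iteration,
                freeBytes / (1024.0 * 1024.0),
                totalBytes / (1024.0 * 1024.0));
}
```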


OK, thanks! It might be caused by GPU memory fragmentation.

Does any expert have experience with GPU memory (especially fragmentation) and a CUDA context working alongside an OpenGL context?
Is there any good project or article about this?
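
As far as I know, CUDA only exposes the free/total figures from cudaMemGetInfo, not the size of the largest contiguous free block. Below is a toy sketch (not something to run on the GPU that is driving your display) of how fragmentation could make a single large cudaMalloc fail even though the reported free total still looks fine; whether it actually fails depends on the driver's allocator:

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Toy illustration of fragmentation: after freeing every other chunk, the
// total free memory is large, but the largest contiguous hole may be only
// one chunk wide, so a bigger single allocation can fail even though
// cudaMemGetInfo (and nvidia-smi) still show plenty of free memory overall.
int main()
{
    const size_t chunk = 64ull * 1024 * 1024;   // 64 MiB per chunk
    std::vector<void*> chunks;

    // Allocate chunks until the device runs out of memory.
    for (;;) {
        void* p = nullptr;
        if (cudaMalloc(&p, chunk) != cudaSuccess) break;
        chunks.push_back(p);
    }
    cudaGetLastError();  // clear the expected out-of-memory error

    // Free every other chunk, leaving a comb of used and free regions.
    for (size_t i = 0; i < chunks.size(); i += 2) {
        cudaFree(chunks[i]);
        chunks[i] = nullptr;
    }

    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    std::printf("reported free: %.1f MiB of %.1f MiB\n",
                freeBytes / (1024.0 * 1024.0), totalBytes / (1024.0 * 1024.0));

    // Try to grab several contiguous chunks' worth in one allocation.
    void* big = nullptr;
    cudaError_t err = cudaMalloc(&big, 4 * chunk);
    std::printf("4-chunk allocation: %s\n", cudaGetErrorString(err));

    // Clean up whatever is still allocated.
    if (big) cudaFree(big);
    for (void* p : chunks) if (p) cudaFree(p);
    return 0;
}
```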

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.