Our C++ application displays huge datasets created by scanning 3D objects. We have encountered many driver crashes (error code 6, or blue screens) because the amount of data allocated on the GPU can grow beyond the “Total available graphics memory” reported by the NVIDIA Control Panel (a figure that includes dedicated, system, and shared memory).
Is there a way to monitor the “Current available graphics memory” used by the GPU drivers and prevent further allocation of GPU data by our application?
I’ve found several ways in C++ to query the currently available dedicated video memory, but none for the currently available graphics memory as a whole.
Does anybody know a way to get this information?