Odd "hardware configuration does not meet minimum specifications" error

Good evening.

I am writing a little test application in which I have 3 fairly large textures that I use to map an arbitrary object. Unfortunately, starting from the second rendered frame, it often crashes with this error. By "often" I really mean that only in rare cases it works, using the very same unchanged code. Here is the situation:

Hardware:
NVIDIA GTX 660 2 GB (or 4 GB), supports OpenGL 4.4 and textures of size 2048 according to glGet
16GB RAM

Texture configurations tested:
3x 2D texture arrays of size 4096x4096x10, no mipmaps.
3x 2D texture arrays of size 2048x2048x40, no mipmaps.
6x 2D texture arrays of size 2048x2048x20, no mipmaps.

These are all the same texture data in different configurations, with the corresponding triangle->texture mapping adjusted where needed. The idea was to reduce the size of each texture, first by adding depth and then by splitting each texture across two texture units.

I have a VAO and an array buffer where I update the currently visible triangles after doing some culling. This code does not seem to be the problem: when I remove one or two of the textures in use, the code always runs correctly, which leads me to believe the problem is with the textures.
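A minimal sketch of the kind of per-frame update I mean (vao, vbo, Vertex and visibleVertices are just placeholder names, not my actual code):

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// Re-upload only the vertices of the triangles that survived culling this frame.
glBufferSubData(GL_ARRAY_BUFFER, 0, visibleVertices.size() * sizeof(Vertex), visibleVertices.data());
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)visibleVertices.size());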

I am using
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, InternalFormat,Width, Height, Depth);
and
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, TextureDepthNum, Width, Height, 1, Format, Type, Data);
for allocating and filling the textures respectively, with InternalFormat being GL_RGB8, Format GL_RGB and Type GL_UNSIGNED_BYTE.
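To make the upload pattern concrete, here is a minimal sketch of the allocation and per-layer fill (Width, Height, Depth and LayerData are placeholders for the values and data described above):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
// Immutable storage: 1 mip level, GL_RGB8, e.g. 2048 x 2048 with 40 layers.
glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGB8, Width, Height, Depth);
// Fill one layer at a time; the layer index goes into the z offset, depth is 1.
for (GLsizei layer = 0; layer < Depth; ++layer)
{
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                    0, 0, layer,        // x, y, layer offset
                    Width, Height, 1,   // one full layer
                    GL_RGB, GL_UNSIGNED_BYTE, LayerData[layer]);
}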

The textures are all loaded into Data correctly. No errors are reported by OpenGL: I check with glGetError after every call, and I also check shader compilation and linking as described in the documentation.
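To illustrate what I mean by checking after every call, this is roughly the kind of helper I use (a simplified sketch; the real code also checks compile and link status via glGetShaderiv and glGetProgramiv, and assumes <cstdio> and a GL loader header are included):

// Called after every GL call; 'where' is just a label for the log.
static void CheckGLError(const char* where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        fprintf(stderr, "OpenGL error 0x%04X at %s\n", err, where);
}

glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGB8, Width, Height, Depth);
CheckGLError("glTexStorage3D");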

I am quite confused as to what might be causing this error and how I should fix it. I hope I have explained my situation well enough not to waste too much of your time, but I am happy to provide any further information you might need.

Thank you very much for your attention, I hope someone can help me get past this problem :D

This error can happen when you try to execute a command stream which needs more resources than the Windows operating system can allocate on the graphics board.
Or, as the error message puts it: the graphics board is below the minimum specifications required to run the intended application.

The only solution to this is to split the work into smaller pieces (using fewer or smaller textures per draw call) or use graphics boards with more memory.

The texture memory required to load each of these configurations is at least:
4096 * 4096 * 10 * 4 bytes (because RGB8 is uploaded as RGBA8) = 671,088,640 bytes per texture
3 * 671,088,640 bytes = 2,013,265,920 bytes ≈ 1920 MB
which most likely won’t fit into a 2 GB board because there are other processes using graphics memory as well.
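As a quick cross-check, all three tested configurations add up to the same total (a small illustrative calculation, assuming the same 4 bytes per texel as above):

// Total bytes per configuration, 4 bytes per texel (RGB8 stored as RGBA8).
long long cfgA = 3LL * 4096 * 4096 * 10 * 4;  // 2,013,265,920 bytes
long long cfgB = 3LL * 2048 * 2048 * 40 * 4;  // 2,013,265,920 bytes
long long cfgC = 6LL * 2048 * 2048 * 20 * 4;  // 2,013,265,920 bytes
// Splitting changes only the size of each individual allocation, not the total.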

Also, each of the textures needs to find a contiguous block of memory, 640 MB per texture (or 320 MB in the 6x textures case), which can be difficult due to memory fragmentation, so more but smaller textures should find space more easily.
The Kepler GPU architecture supports bindless textures, which means you can also upload an arbitrary number of individual textures and address them inside a shader (see the sketch below).
You could also use compressed textures if you don't need the full-precision data.
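A rough sketch of the bindless flow via ARB_bindless_texture (only an illustration of the API calls; textureId and samplerLocation are placeholder names):

// Host side, after creating and filling the texture as usual:
GLuint64 handle = glGetTextureHandleARB(textureId);
glMakeTextureHandleResidentARB(handle);           // handle must be resident before drawing
glUniformHandleui64ARB(samplerLocation, handle);  // hand the 64-bit handle to the shader

// Shader side:
// #extension GL_ARB_bindless_texture : require
// layout(bindless_sampler) uniform sampler2DArray bigTexture;
// ... texture(bigTexture, vec3(uv, layer)) is then used as usual.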

Thank you, that does seem to be the issue. The error description seemed a bit vague, and I expected OpenGL to report an error during allocation if it were a memory issue, not during a draw call. Is there a way to be notified of similar errors during a draw call so they can be handled in code, like with glGetError?
I will definitely look into bindless textures in the hope of fixing this issue.
Thank you very much!

No, from the OpenGL side everything is fine. It is the Windows operating system kernel which cannot fulfill the requested drawing operation due to lack of resources.

Not rendering, or any behavior other than ending the application, would be incorrect. What if exactly that data contained the interesting part of a tumor in medical imaging and you simply did not render it?

There is no fix for this issue on a 2 GB board if you need all the texture data at the same time. That simply needs a bigger board to work reliably. => Higher minimum hardware configuration specifications.

Alright, understood. Thank you very, very much, I will look for better solutions instead of using so much memory for the textures :D