Driver crashes application on RTX 2080 / GTX 1080 as soon as dedicated VRAM is full

Hi, I am running into a problem very similar to the one described in this other forum thread:
Out of memory error thrown by the driver instead of OpenGL

Windows Event-Log:

An application has requested more GPU memory than is
available in the system.
The application will now be closed.

I am getting this error when I allocate large numbers of depth buffers (e.g. for shadow mapping).

For debugging and reproducing the problem I wrote a minimal utility that allocates 16384x16384 depth-buffer textures until either the application crashes or a GL out-of-memory error is logged.

On top of that, I am observing inconsistent behavior across different NVIDIA GPUs when testing this scenario.
When I use the 32-bit float depth-buffer format for the textures, the driver only allocates as many textures as fit into the dedicated VRAM; the shared video memory is not used at all.

// benchmark.cpp line 35
constexpr auto texFormat = GL_DEPTH_COMPONENT32F;
constexpr auto texLayout = GL_DEPTH_COMPONENT;
constexpr auto texDataType = GL_FLOAT;
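
For reference, the allocation loop itself boils down to something like the following. This is a simplified sketch, not the exact code from the linked repo (GL context creation and texture parameter setup are omitted), using the same constants as above:

// Simplified sketch of the allocation loop (illustrative, not verbatim from the repo).
#include <GL/glew.h>
#include <cstdio>
#include <vector>

constexpr GLsizei texSize    = 16384;                 // 16384 x 16384 texels per texture
constexpr auto    texFormat   = GL_DEPTH_COMPONENT32F; // same constants as benchmark.cpp
constexpr auto    texLayout   = GL_DEPTH_COMPONENT;
constexpr auto    texDataType = GL_FLOAT;

void allocateUntilOutOfMemory()
{
    std::vector<GLuint> textures;
    for (;;)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        // Allocate storage only, no pixel upload (data == nullptr).
        glTexImage2D(GL_TEXTURE_2D, 0, texFormat,
                     texSize, texSize, 0,
                     texLayout, texDataType, nullptr);

        // Block until the driver has processed the allocation so that
        // glGetError reflects it.
        glFinish();

        const GLenum err = glGetError();
        if (err == GL_OUT_OF_MEMORY)
        {
            std::printf("GL_OUT_OF_MEMORY after %zu textures\n", textures.size());
            break; // the graceful outcome I would expect
        }
        if (err != GL_NO_ERROR)
        {
            std::printf("unexpected GL error 0x%04X\n", err);
            break;
        }
        textures.push_back(tex);
        // On the GTX 1080 / RTX 2080 with the depth format above, the process
        // is killed inside this loop before GL_OUT_OF_MEMORY is ever returned.
    }
}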

The application then simply crashes as soon as the dedicated VRAM is full (no GL errors or warnings are logged).

Using my sample program I verified that this happens on both a GTX 1080 and an RTX 2080
(in the charts, used dedicated VRAM is shown in blue, used shared GPU memory in red, and used system RAM in yellow).

On a GTX 980 Ti, however, the driver DOES use the system's shared video memory and it DOES log an OpenGL error within the application, but only once both the dedicated VRAM AND the shared memory pool have been exhausted by the texture allocations (TL;DR: no crash on the GTX 980 Ti).

Now I'm wondering: is the driver behavior on the GTX 1080 and RTX 2080 a bug?
Shouldn't it also try to use the available shared memory, just like it does on the GTX 980 Ti?

I produced the described results with the latest Game Ready driver, version 425.31, on the cards mentioned above. (I also tested the Creator Ready driver on the RTX 2080; it didn't seem to make a difference there.)

Here is the link to the source of my minimal reproduction / test app that I use: Bitbucket

PS: I also ran the same tests with a non-depth texture format of the same size:

// benchmark.cpp line 29
//constexpr auto texFormat = GL_R32F;
//constexpr auto texLayout = GL_RED;
//constexpr auto texDataType = GL_FLOAT;

Interestingly, only with this texture format is the shared memory also used on the GTX 1080 and RTX 2080, which is what I would have expected for the depth-buffer formats as well.
(This is also reproducible in our production code, but avoiding the depth-buffer formats means losing sampler2DShadow in GLSL etc.)

Please let me know whether this is a driver bug, or what else can be done to avoid a driver crash when running out of VRAM under these conditions.
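
(The only stop-gap I can think of so far is to query the remaining dedicated VRAM via the GL_NVX_gpu_memory_info extension before each allocation and stop early; a rough sketch of that idea is below. It assumes the extension is exposed by the driver, and it only avoids the crash, it does not explain the inconsistent behavior.)

// Rough workaround sketch (my own idea, not part of the benchmark): ask the
// driver how much dedicated VRAM is still free via GL_NVX_gpu_memory_info
// (values are reported in KiB) and skip the allocation if it would not fit.
#include <GL/glew.h>
#include <cstdint>

#ifndef GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif

bool hasFreeDedicatedVram(std::uint64_t requiredBytes)
{
    if (!GLEW_NVX_gpu_memory_info)
        return false; // extension not exposed -> can't tell, assume no

    GLint availableKiB = 0;
    glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &availableKiB);
    return static_cast<std::uint64_t>(availableKiB) * 1024u >= requiredBytes;
}

// Example: a 16384 x 16384 GL_DEPTH_COMPONENT32F texture needs about
// 16384 * 16384 * 4 bytes = 1 GiB, so check before calling glTexImage2D:
//   if (hasFreeDedicatedVram(16384ull * 16384ull * 4ull)) { /* allocate */ }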

Thanks and regards,
Wolfgang