Quadro card memory leak under per-frame VBO churn

Hey everybody, I've found a pretty severe memory leak on Win11 with a Quadro P4000 when using VBOs. I can't reproduce it on other systems (tried some AMD cards, RTX 4xxx, and the RTX A3000 in my ThinkPad P15 Gen 2).

Attached is a minimal example that repeatedly binds/unbinds VBOs and enables/disables vertex attribute arrays on a thread.

This is the part that produces the leak:

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);

const float verts[] = {-1.f, -1.f, 1.f, -1.f, -1.f, 1.f,
                        -1.f, 1.f,  1.f, -1.f, 1.f,  1.f};
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);

glDrawArrays(GL_TRIANGLES, 0, 6);

glDisableVertexAttribArray(0);

glBindBuffer(GL_ARRAY_BUFFER, 0);
glDeleteBuffers(1, &vbo);
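For context, the attached repro essentially runs the block above once per frame and then presents. A minimal sketch of that churn loop (names here are illustrative, not the exact attached source):

```cpp
// Sketch of the per-frame churn: create, draw with, and delete a fresh VBO
// every frame, then present. drawFrame() stands for the gen/bind/draw/delete
// sequence quoted above; renderLoop is a hypothetical name.
#include <windows.h>

void drawFrame();  // the VBO gen/bind/draw/delete block above

void renderLoop(HDC hdc, volatile bool *running) {
    while (*running) {
        drawFrame();
        SwapBuffers(hdc);  // driver-side allocations accumulate per frame
    }
}
```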

Using a WGL debug context prevents the leak on the Quadro machine:

const int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 1,
                        WGL_CONTEXT_MINOR_VERSION_ARB, 0,
                        WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_DEBUG_BIT_ARB,
                        0 };
ctx = wglCreateContextAttribsARB(hdc, nullptr, attribs);
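For anyone reproducing this: wglCreateContextAttribsARB itself has to be fetched at runtime through a temporary legacy context. A minimal sketch, with error handling omitted and tokens taken from wglext.h:

```cpp
// Fetch wglCreateContextAttribsARB via a throwaway legacy context, then
// create the debug-flagged context. Error handling omitted for brevity.
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>  // WGL_CONTEXT_* tokens, PFNWGLCREATECONTEXTATTRIBSARBPROC

HGLRC createDebugContext(HDC hdc) {
    HGLRC tmp = wglCreateContext(hdc);   // legacy context just for loading
    wglMakeCurrent(hdc, tmp);

    auto wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)
            wglGetProcAddress("wglCreateContextAttribsARB");

    const int attribs[] = { WGL_CONTEXT_MAJOR_VERSION_ARB, 1,
                            WGL_CONTEXT_MINOR_VERSION_ARB, 0,
                            WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_DEBUG_BIT_ARB,
                            0 };
    HGLRC ctx = wglCreateContextAttribsARB(hdc, nullptr, attribs);

    wglMakeCurrent(nullptr, nullptr);
    wglDeleteContext(tmp);
    wglMakeCurrent(hdc, ctx);
    return ctx;
}
```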

Deleaker points at line 133 in main.cpp, the glDrawArrays(…) call; the call stack goes through DrvPresentBuffers in nvoglv64.dll and ends in a LocalAlloc call in KernelBase.dll.

This is a screenshot from the attached binary that reproduces the leak

This is the same machine with debug context

QuadroMemleak.zip (7.7 MB)

NVIDIA System Information 10-28-2025 14-56-45.txt (3.4 KB)

Systeminfo.txt (8.4 KB)

Edit:

One of our users with a Quadro RTX A5000 reported that downgrading to driver v553.62 solved the leak on his side. We only have limited access to Quadro cards at the moment, so I can't confirm this myself.


I am experiencing a similar issue in an application, on a Quadro P620. I tried running your repro, and my card shows the same behaviour. GL_VERSION: 4.6.0 NVIDIA 581.15. I will try downgrading my driver to see whether this is actually a driver issue and not something in our application.


I can confirm we see this behavior in our application as well: watching in Process Monitor, the shared GPU memory grows until the driver reports "An application has requested more GPU memory than is available in the system". We can confirm this on NVIDIA cards (Quadro P620, RTX A1000). Downgrading to driver version 572.60 or older does not show this behavior.

So this seems like a memory leak introduced in driver version 572.83.


Tested the latest driver, 581.80 (Nov 04); the memory leak is not fixed in that version.

Edit: Threaded optimization off or on, it still leaks. (I can only reply 3 times to a topic as a "new" user, so I'm editing this to reply to @harvey9.)

Try disabling "threaded optimization" in the NVIDIA Control Panel.

Hi, I am having the same issue, and it goes away when I disable "threaded optimization". Is it possible to disable that option programmatically? Is there any clue to solving the memory leak even when "threaded optimization" is on?

Edit: I have also noticed that my software's performance improves when I disable "threaded optimization". Is this normal?
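On the programmatic question: as far as I know the usual route is NVIDIA's NVAPI driver-settings (DRS) interface. A hedged sketch, assuming the NVAPI SDK headers (nvapi.h, NvApiDriverSettings.h) and that it runs before the GL context is created; writing to the base profile affects every application, so an app-specific profile is preferable in production:

```cpp
// Sketch: disable "Threaded optimization" via NVAPI's DRS interface.
// Assumes the NVAPI SDK; OGL_THREAD_CONTROL_ID / OGL_THREAD_CONTROL_DISABLE
// come from NvApiDriverSettings.h. Must run before context creation.
#include <nvapi.h>
#include <NvApiDriverSettings.h>

bool disableThreadedOptimization() {
    if (NvAPI_Initialize() != NVAPI_OK) return false;

    NvDRSSessionHandle session = 0;
    if (NvAPI_DRS_CreateSession(&session) != NVAPI_OK) return false;
    NvAPI_DRS_LoadSettings(session);

    NvDRSProfileHandle profile = 0;
    NvAPI_DRS_GetBaseProfile(session, &profile);  // global; prefer per-app

    NVDRS_SETTING setting = {};
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = OGL_THREAD_CONTROL_ID;
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = OGL_THREAD_CONTROL_DISABLE;

    bool ok = NvAPI_DRS_SetSetting(session, profile, &setting) == NVAPI_OK
           && NvAPI_DRS_SaveSettings(session) == NVAPI_OK;
    NvAPI_DRS_DestroySession(session);
    return ok;
}
```

Note this persists the setting in the driver profile rather than toggling it per run, which is why it should really go in an application-specific profile.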

We are experiencing a similar issue in our application after driver version 572.83, where a memory leak occurs when VBOs are freed and re-created continuously. In our application, disabling threaded optimization stops the memory leak.

I compiled and ran the test program posted here and do see the leak, but turning off threaded optimization does not fix it for me (as another user also reported).

Tested with:

NVIDIA RTX A2000 Laptop GPU WDDM

Driver version 582.16 (dated 19 Dec 2025)

We are also experiencing this issue. I believe it has to do with the notification messages produced by debug output support. With debug output DISABLED, these notifications appear to pile up in a buffer inside the driver (e.g., "vertex buffer allocated on System Heap (fast)"). The memory leak goes away when you ENABLE debug output, and it's stable whether notification-severity messages are turned on or off.

A notification message is created whenever data moves from CPU to GPU, so every call to glMapBuffer or glBufferData results in a message. If you're constantly mapping host-visible buffers for read/write every frame, you're going to see memory grow until this driver issue is resolved.
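If enabling debug output really does drain that queue, the workaround is cheap. A sketch of enabling it on a debug-flagged 4.3+ context (or via KHR_debug); the callback body is illustrative, and function loading via glad/GLEW is not shown:

```cpp
// Sketch: enable GL debug output so driver notification messages are
// delivered to a callback rather than (per the theory above) queued forever.
// APIENTRY comes from the platform headers; requires GL 4.3 or KHR_debug.
#include <windows.h>
#include <GL/glcorearb.h>  // or your loader's header (glad/GLEW)

static void APIENTRY onGlMessage(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar *message, const void *user) {
    // Cheapest possible sink: ignore (or log) the message.
    (void)source; (void)type; (void)id; (void)severity;
    (void)length; (void)message; (void)user;
}

void enableDebugOutput() {
    glEnable(GL_DEBUG_OUTPUT);
    glDebugMessageCallback(onGlMessage, nullptr);
    // Keep notification-severity messages flowing to the callback:
    glDebugMessageControl(GL_DONT_CARE, GL_DONT_CARE,
                          GL_DEBUG_SEVERITY_NOTIFICATION, 0, nullptr, GL_TRUE);
}
```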

This seems to be fixed with 595.71

The issue is still there for me on 595.97, using an RTX A2000 Laptop GPU.