If we use the multi-threaded option to load our OpenGL game shaders, the game's memory usage grows until it consumes all available system memory, and it is not released after loading finishes.
This doesn't happen if we skip this functionality and load the shaders sequentially.
This happens on Windows 10 with the latest NVIDIA driver, 536.40.
Shaders can reference a lot of GL resources, and if there is an error in how your threads handle allocations, it is easy to drop or overwrite references.
But this is all guesswork. Would it be possible for you to share the relevant parts of your shader loader code?
Hi @MarkusHoHo, our shader loading always runs on the main thread. However, huge memory leaks occur if we call the glMaxShaderCompilerThreadsARB function with the parameter 0xFFFFFFFF before loading the shaders. Otherwise everything works fine, but very slowly, because the driver doesn't use threads to compile the shaders.
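For illustration, the difference between the two code paths is essentially just this call before shader loading. This is only a sketch of how the extension is typically enabled on Windows (the typedef comes from glext.h; the actual engine code is more involved):

```cpp
// Sketch: enable driver-side parallel shader compilation on Windows.
// Requires a current GL context and a driver exposing GL_ARB_parallel_shader_compile.
#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>   // provides PFNGLMAXSHADERCOMPILERTHREADSARBPROC

static PFNGLMAXSHADERCOMPILERTHREADSARBPROC glMaxShaderCompilerThreadsARB = nullptr;

void EnableParallelShaderCompile()
{
    // Returns nullptr if the driver does not expose the extension.
    glMaxShaderCompilerThreadsARB =
        (PFNGLMAXSHADERCOMPILERTHREADSARBPROC)wglGetProcAddress("glMaxShaderCompilerThreadsARB");

    if (glMaxShaderCompilerThreadsARB)
        glMaxShaderCompilerThreadsARB(0xFFFFFFFFu); // 0xFFFFFFFF = let the driver pick the thread count
}
```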
Hi @MarkusHoHo, any new info about this problem?
We are getting a lot of bad reviews for our game because of the long shader loading times…
But we can't use parallel compiling because of the memory leak…
As we see it, the memory leak is in the driver, because we can reproduce it simply by using the
glMaxShaderCompilerThreadsARB function.
We can provide a game key and access to a developer build with parallel compiling enabled (which causes the leak) to anyone at NVIDIA. Please let us know…
It's the same code, just with or without calling glMaxShaderCompilerThreadsARB(U32_MAX) beforehand…
With the call, memory goes up to 10 GB; without it, we stay at 2 GB after shader loading.
How many shaders are we talking about and how big are they?
And did you try to set glMaxShaderCompilerThreadsARB() to a lower value and force the driver to use a specific number of threads? This function more or less works as a limiter, so using U32_MAX will leave the count up to the underlying driver.
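For example, something along these lines would cap the pool and read back the limit currently in effect. This is just a sketch, assuming your GL loader (glad here) exposes glMaxShaderCompilerThreadsARB and the GL_MAX_SHADER_COMPILER_THREADS_ARB enum:

```cpp
// Sketch: cap the driver's shader-compiler thread pool and query the limit
// that is currently set (initial value is 0xFFFFFFFF per the extension spec).
#include <glad/glad.h>
#include <cstdio>

void CapShaderCompilerThreads(GLuint count)
{
    glMaxShaderCompilerThreadsARB(count);   // e.g. 4 instead of 0xFFFFFFFF

    GLint limit = 0;
    glGetIntegerv(GL_MAX_SHADER_COMPILER_THREADS_ARB, &limit);
    std::printf("shader compiler thread limit: %d\n", limit);
}
```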
I also reached out to our GL team to see if they are aware of any such issue with this specific extension.
Accessing a full game build of yours might not be feasible from our side. Would it be possible for you to create a small app that would reproduce the problem?
Secondly, you specifically mentioned driver version 536.40. Did the leak only start to appear with this driver version?
Around 3000 shaders; the biggest ones are under 200k.
We tried glMaxShaderCompilerThreadsARB(4), for example, with the same results in memory usage…
Hi @MarkusHoHo, we tried the older driver version 472.12 and unfortunately the behavior is the same.
Without glMaxShaderCompilerThreadsARB(U32_MAX), the program uses 2 GB after shader loading.
With glMaxShaderCompilerThreadsARB(U32_MAX), the program uses 9 GB after shader loading.
All right, thanks for the further testing @XavierG.
I have now passed it on to Engineering and filed an internal bug report. Someone might contact you for further details.
Can you add details about the system on which you encountered this memory usage?
CPU, GPU, RAM,…
And you are certain that the memory would never be released while the game is running?
We can provide a game key and access to a developer build with parallel compiling enabled (which causes the leak) to anyone at NVIDIA. Please let us know…
Whoever looks into this will check with you if we can try this approach. But it would be easier if you could produce a minimal app that would show the same behaviour.
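To give an idea of what would help us most, a repro could be as small as something like the sketch below: it generates a few thousand trivially different fragment shaders, compiles them with the parallel compiler enabled, and then waits so process memory can be inspected. This is only an illustrative sketch, assuming GLFW and a glad loader generated with GL_ARB_parallel_shader_compile; adapt it to whatever windowing and loading code your engine already uses.

```cpp
// Minimal repro sketch: compile many generated fragment shaders with the
// driver's parallel compiler enabled, then pause so memory usage can be
// compared in Task Manager (run with and without --parallel).
#include <glad/glad.h>
#include <GLFW/glfw3.h>
#include <cstdio>
#include <string>
#include <vector>

#ifndef GL_COMPLETION_STATUS_ARB
#define GL_COMPLETION_STATUS_ARB 0x91B1
#endif

int main(int argc, char** argv)
{
    const bool parallel = (argc > 1 && std::string(argv[1]) == "--parallel");

    glfwInit();
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
    GLFWwindow* window = glfwCreateWindow(64, 64, "shader-compile-repro", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);

    if (parallel && GLAD_GL_ARB_parallel_shader_compile)
        glMaxShaderCompilerThreadsARB(0xFFFFFFFFu);   // let the driver pick the thread count

    // Generate slightly different fragment shaders to avoid trivial cache hits.
    std::vector<GLuint> shaders;
    for (int i = 0; i < 3000; ++i) {
        std::string src =
            "#version 450\n"
            "out vec4 color;\n"
            "void main() { color = vec4(" + std::to_string(i) + ".0 / 3000.0); }\n";
        const char* text = src.c_str();
        GLuint s = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(s, 1, &text, nullptr);
        glCompileShader(s);
        shaders.push_back(s);
    }

    // Wait for the asynchronous compiles to finish before measuring memory.
    for (GLuint s : shaders) {
        GLint done = GL_FALSE;
        while (done == GL_FALSE)
            glGetShaderiv(s, GL_COMPLETION_STATUS_ARB, &done);
    }

    std::printf("All shaders compiled; check process memory now, then press Enter.\n");
    std::getchar();

    for (GLuint s : shaders)
        glDeleteShader(s);
    glfwTerminate();
    return 0;
}
```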
Hi @MarkusHoHo, we have not been contacted yet. This is a serious problem for us. Do you have any idea how long it may take until NVIDIA can look at the problem?