We are experiencing what looks like a memory leak when using glMaxShaderCompilerThreadsARB.

If we use the multi-threaded option to load our OpenGL game shaders, the memory used by the game rises until it consumes all available system memory, and it is not released after the load.
This doesn’t happen if we skip this functionality and load the shaders sequentially.
This happens on Windows 10 with the latest NVIDIA driver, 536.40.

Any advice about this behaviour?

Hi there @XavierG and welcome to the NVIDIA developer forums!

Did you check OpenGL and multithreading - OpenGL Wiki for guidelines on using multiple contexts?

Shaders potentially access a lot of GL resources, and if your threads mishandle allocations, you can easily drop or overwrite references.

But this is all guesswork; would it be possible for you to share the relevant bits of your shader loader code?

Hi @MarkusHoHo, our shader loading always runs on the main thread. However, huge memory leaks occur if we call the glMaxShaderCompilerThreadsARB function with parameter 0xFFFFFFFF before loading shaders. Otherwise everything works fine, but very slowly, because the driver does not use threads to compile the shaders.
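
For reference, this is roughly how we enable it before loading the shaders (a simplified sketch, not our exact loader code; we assume wglGetProcAddress on Windows and that the driver exposes ARB_parallel_shader_compile):

#include <windows.h>
#include <GL/gl.h>

// Function pointer type as declared for ARB_parallel_shader_compile in glext.h.
typedef void (APIENTRY *PFNGLMAXSHADERCOMPILERTHREADSARBPROC)(GLuint count);

void EnableParallelShaderCompile()
{
    // Resolve the extension entry point from the current GL context.
    PFNGLMAXSHADERCOMPILERTHREADSARBPROC pglMaxShaderCompilerThreadsARB =
        (PFNGLMAXSHADERCOMPILERTHREADSARBPROC)wglGetProcAddress("glMaxShaderCompilerThreadsARB");

    // 0xFFFFFFFF asks the driver to pick the number of compiler threads itself.
    if (pglMaxShaderCompilerThreadsARB)
        pglMaxShaderCompilerThreadsARB(0xFFFFFFFF);
}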

Hi @MarkusHoHo, any new info about this problem?
We are getting a lot of bad reviews for our game because of the longer shader loading times…
But we can’t use parallel compiling because of the memory leak…
As we see it, the driver is what has the memory leak, because we can reproduce it just by using the
glMaxShaderCompilerThreadsARB function.
We can provide a game key and access to a developer build with the parallel compiling that causes the leak to anyone at NVIDIA. Please let us know…

Hi @MarkusHoHo, this is a snippet of our code…

GLhandleARB ShaderCreate(const char* sVertexShader, const char* sPixelShader, const char* sGeometryShader)
{
	GLhandleARB hShader = glCreateProgram();
	Assert(hShader);

	GLhandleARB hVertex(0);
	GLhandleARB hFragment(0);
	GLhandleARB hGeometry(0);

	// Vertex shader
	if (sVertexShader != 0)
		hVertex = ShaderCompile(hShader, sVertexShader, GL_VERTEX_SHADER);

	// Pixel (fragment) shader
	if (sPixelShader != 0)
		hFragment = ShaderCompile(hShader, sPixelShader, GL_FRAGMENT_SHADER);

	// Geometry shader
	if (sGeometryShader != 0)
		hGeometry = ShaderCompile(hShader, sGeometryShader, GL_GEOMETRY_SHADER);

	glLinkProgram(hShader);

	if (hVertex)
		glDeleteShader(hVertex);
	if (hFragment)
		glDeleteShader(hFragment);
	if (hGeometry)
		glDeleteShader(hGeometry);

	return hShader;
}



// Compile a single shader stage and attach it to the given program object.
GLhandleARB ShaderCompile(const GLhandleARB hShader, const char* sCode, const GLenum nShader)
{
	GLhandleARB hCompile = glCreateShader(nShader);
	GLint nSize((GLint) strlen(sCode));
	glShaderSource(hCompile, 1, &sCode, &nSize);
	glCompileShader(hCompile);
	glAttachShader(hShader, hCompile);
	return hCompile;
}

Thanks for sharing that code, which looks fine, but that is the sequential version without enabling glMaxShaderCompilerThreadsARB, correct?

It would be more important to see the code that, according to you, creates the memory leak.

It’s the same code, only with or without calling the glMaxShaderCompilerThreadsARB(U32_MAX) function beforehand…
With the call, memory goes up to 10 GB; without it, we stay at 2 GB after shader loading.

How many shaders are we talking about and how big are they?

And did you try to set glMaxShaderCompilerThreadsARB() to a lower value and force the driver to use a specific number of threads? This function more or less works as a limiter, so using U32_MAX will leave the count up to the underlying driver.
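
For example (purely illustrative; the exact effect of the cap is up to the driver):

// Cap the driver’s background shader-compiler threads at a fixed number...
glMaxShaderCompilerThreadsARB(4);

// ...instead of leaving the count entirely up to the implementation:
glMaxShaderCompilerThreadsARB(0xFFFFFFFF); // i.e. U32_MAX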

I also reached out to our GL team to see if they are aware of any such issue with this specific extension.

Accessing a full game build of yours might not be feasible from our side. Would it be possible for you to create a small app that would reproduce the problem?
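
Something along these lines would already be enough, if it shows the same growth (a rough sketch only, assuming GLFW for context creation and GLEW for loading the extension, not a confirmed reproducer):

#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    // Create a minimal window and GL context.
    glfwInit();
    GLFWwindow* window = glfwCreateWindow(64, 64, "repro", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glewInit();

    // The call under suspicion: let the driver compile on its own worker threads.
    if (GLEW_ARB_parallel_shader_compile)
        glMaxShaderCompilerThreadsARB(0xFFFFFFFF);

    // Compile a large batch of slightly different trivial programs and compare
    // process memory with and without the call above.
    std::vector<GLuint> programs;
    for (int i = 0; i < 3000; ++i)
    {
        std::string src =
            "#version 330 core\n"
            "out vec4 color;\n"
            "void main() { color = vec4(" + std::to_string(i % 255) + ".0 / 255.0); }\n";
        const char* text = src.c_str();

        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &text, nullptr);
        glCompileShader(fs);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        glDeleteShader(fs);
        programs.push_back(prog);
    }

    // Block until all issued GL commands have completed before measuring memory.
    glFinish();
    std::printf("done; check the process memory in Task Manager\n");

    glfwTerminate();
    return 0;
}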

Secondly, you specifically mentioned driver version 536.40.

Did the leak only start to appear with this driver version?

Around 3000 shaders; the biggest ones are smaller than 200k.
We tried a value of glMaxShaderCompilerThreadsARB(4), for example, with the same results in memory usage…

We don’t know; this is the first time we have tried to use the parallel shader compiling functionality…

Hi @MarkusHoHo, we tried with the older driver version 472.12 and unfortunately the behavior is the same.
Without glMaxShaderCompilerThreadsARB(U32_MAX) the program uses 2 GB after shader loading.
With glMaxShaderCompilerThreadsARB(U32_MAX) the program uses 9 GB after shader loading.
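
(If it helps, numbers like these can be logged around the shader loading with a small helper; this is just a hypothetical sketch using the Windows PSAPI call GetProcessMemoryInfo, not our actual instrumentation:)

#include <windows.h>
#include <psapi.h>   // link with psapi.lib
#include <cstdio>

// Hypothetical helper: print the current process memory usage in MB.
void PrintProcessMemory(const char* sLabel)
{
    PROCESS_MEMORY_COUNTERS pmc = {};
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
    {
        std::printf("%s: working set %zu MB, commit %zu MB\n", sLabel,
                    (size_t)pmc.WorkingSetSize / (1024u * 1024u),
                    (size_t)pmc.PagefileUsage / (1024u * 1024u));
    }
}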

All right, thanks for the further testing @XavierG.

I have passed it on to Engineering and filed an internal bug report. Someone might contact you for further details.

Can you add details about the system on which you encountered this memory usage?

  • CPU, GPU, RAM,…

And you are certain that the memory would never be released while the game is running?

We can provide a game key and access to a developer build with the parallel compiling that causes the leak to anyone at NVIDIA. Please let us know…

Whoever looks into this will check with you if we can try this approach. But it would be easier if you could produce a minimal app that would show the same behaviour.

Thanks!

We tested it on many kinds of computers and NVIDIA cards (1650, 1060, 750M…), including virtualized ones… (with a passthrough NVIDIA card).

Yes, and we made sure that we don’t have any leak on our end.

It’s easy to reproduce, because it occurs at game init, on our shader loading screen.

Hi @MarkusHoHo, we have not been contacted yet. This is becoming a serious problem for us; do you have any idea how long it may take until NVIDIA can look at the problem?

Hi @XavierG

We are not able to get a repro with basic OpenGL apps.

Would it be possible to get a sample application for debugging the issue?

Thanks.

Please, @Suvind, would you send us a private message with an email address so we can send you the game key and some instructions?