Stumped on GLSL error C5041 (cannot locate suitable resource to bind variable... possibly large array)

Hi all,

I have an OpenGL-based graphics engine coming along quite nicely, but I’m currently stumped on a problem that arose after adding a new vertex shader output array.

The vertex shader outputs the usual things one would calculate per vertex to minimise fragment processing (e.g. fragment locations in tangent and light space), done on a per-light basis with a current maximum of 8 lights per lighting zone. The interface block is below.

#version 460
const int cMaxZoneLightCount = 8;

out vData
{
    vec3 vFragmentLocationWorld;                            // This vertex's location in world coords
    vec2 vTextureUV;
    vec4 vNormal;
    vec4 vFragmentLocationLightSpace[cMaxZoneLightCount];   // Light-space location per light (up to 8)
    vec4 vFragmentLocationTextureSpace[cMaxZoneLightCount]; // Transform for projective texture lights
    vec4 vViewLocationTangent;
    vec4 vFragmentLocationTangent;                          // Tangent-space transforms for normal mapping
    vec4 vLightLocationTangent[cMaxZoneLightCount];         // Light locations in tangent space (up to 8)
    vec4 vLightMRPTangent[cMaxZoneLightCount];              // MRP for tube lights in tangent space (up to 8)
    vec4 vTangent;
    vec4 vBitangent;                                        // Precomputed vertex tangent/bitangent data
} vDataOut;

Everything was working fine until I added the vLightMRPTangent[cMaxZoneLightCount] array. Adding it brings the total number of vertex output components to 153, and the shader now fails to compile with error C5041: “cannot locate suitable resource to bind variable… possibly large array”.
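(For the count: the five standalone vec4s are 5 × 4 = 20 components, the four 8-element vec4 arrays are 4 × 8 × 4 = 128, plus 3 for the vec3 and 2 for the vec2, giving 20 + 128 + 3 + 2 = 153.)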

This is strange though, because an int mvoc = GL_MAX_VERTEX_OUTPUT_COMPONENTS gives 37,154.
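(One thing worth flagging here: 37,154 is exactly 0x9122, which is the numeric value of the GL_MAX_VERTEX_OUTPUT_COMPONENTS token itself. Assigning the token to an int just stores the enum, so the line above never actually asked the driver anything. The real limit has to be queried with glGetIntegerv; a minimal sketch, assuming a current GL context and that glewInit() has already been called:

#include <cstdio>
#include <GL/glew.h>

// Call with a current GL context. GL_MAX_VERTEX_OUTPUT_COMPONENTS is
// just the token 0x9122 (decimal 37,154); the limit must be queried.
void PrintVertexOutputLimit()
{
    GLint maxComponents = 0;
    glGetIntegerv(GL_MAX_VERTEX_OUTPUT_COMPONENTS, &maxComponents);
    std::printf("GL_MAX_VERTEX_OUTPUT_COMPONENTS = %d\n", maxComponents);
}

I'd expect the query to come back as something like 128 on this hardware, though that's an assumption rather than something verified here; it would line up with the behaviour below.)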

Now, the interesting thing: if I declare vLightMRPTangent[2] instead of [8], that brings the number of vertex shader output components down to 129 and it works. Similarly, vLightMRPTangent[3], which puts it at 133 components, generates a different but verbose and nasty link failure.

So it’s looking a lot like my still-formidable GTX 970 only wants to send 128 components down the pipeline!

I’m developing on Windows 10 with GLEW 2.1 and the November 2019 NVIDIA driver. I’m not using any other extensions or libraries (e.g. GLM or GLUT); I do all my matrix work myself.

I can get around this by implementing subsequent features per pixel rather than per vertex, but that’s a real pain, because it’s going to cost efficiency and that could become a big problem later. (There’s a sketch of the per-pixel route below.)
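For concreteness, here’s a minimal sketch of what the per-pixel variant could look like. This is only a sketch: the ZoneLights block, uLightLocationWorld and oFragColour are hypothetical names, and it assumes the per-light data arrives in a std140 uniform block instead of being fanned out across per-vertex interpolants.

#version 460
const int cMaxZoneLightCount = 8;

in vData
{
    vec3 vFragmentLocationWorld;
    vec2 vTextureUV;
    vec4 vNormal;
    vec4 vTangent;
    vec4 vBitangent;
} vDataIn; // 17 components, comfortably under the 128 limit

// Hypothetical light block: lights stay in world space and the
// tangent-space transform moves here, once per fragment.
layout(std140) uniform ZoneLights
{
    vec4 uLightLocationWorld[cMaxZoneLightCount];
};

out vec4 oFragColour;

void main()
{
    // Rebuild the world-to-tangent basis from the interpolated vectors.
    mat3 worldToTangent = transpose(mat3(vDataIn.vTangent.xyz,
                                         vDataIn.vBitangent.xyz,
                                         vDataIn.vNormal.xyz));
    vec3 colour = vec3(0.0);
    for (int i = 0; i < cMaxZoneLightCount; ++i)
    {
        // Per-light tangent-space direction, now computed per pixel.
        vec3 lightDirTangent = worldToTangent *
            (uLightLocationWorld[i].xyz - vDataIn.vFragmentLocationWorld);
        // ... lighting maths as before, using lightDirTangent ...
    }
    oFragColour = vec4(colour, 1.0);
}

That caps the interpolant budget at 17 components no matter how many lights are in the zone, with the per-light transforms paid for per fragment instead.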

Any ideas anyone?

Cheers,
MWS

Solved with help from Stack Exchange: we shouldn’t use vertex shaders to do any lighting calculations. Although it’s theoretically an optimisation, the work belongs in the fragment shader and should be done there.