Since driver 565.90 (the newest at the time of writing), on an RTX 3080, `imageSize` in GLSL does not return the correct value. The previous driver, and every driver before that (for at least several years), did not have this bug.

The shader looks like this:

```glsl
#version 460 core

layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;

layout(set = 0, binding = 0) uniform samplerCube srcImage;
layout(set = 0, binding = 1) restrict writeonly uniform imageCube dstImage;

vec3 ComputeTextureCoords(uvec2 size) {
    const vec2 st = (vec2(gl_GlobalInvocationID.xy) + 0.5f) / vec2(size);
    const vec2 uv = 2.0f * vec2(st.x, 1.0f - st.y) - vec2(1.0f);
    const vec3 coords[6] = vec3[](
        vec3( 1.0f,  uv.y, -uv.x),
        vec3(-1.0f,  uv.y,  uv.x),
        vec3( uv.x,  1.0f, -uv.y),
        vec3( uv.x, -1.0f,  uv.y),
        vec3( uv.x,  uv.y,  1.0f),
        vec3(-uv.x,  uv.y, -1.0f)
    );
    return normalize(coords[gl_GlobalInvocationID.z % 6u]);
}

void main() {
    const uvec2 pixel = gl_GlobalInvocationID.xy;
    const uvec2 dst_size = uvec2(imageSize(dstImage).xy);
    if (all(lessThan(pixel, dst_size))) {
        const vec3 uvw = ComputeTextureCoords(dst_size);
        const float texel_size = 1.0f / float(dst_size.x);
        uint taps = 0u;
        vec4 sum = vec4(0.0f);
        const int sample_size = 3;
        // TODO: optimize
        for (int i = -sample_size; i <= sample_size; i++) {
            for (int j = -sample_size; j <= sample_size; j++) {
                for (int k = -sample_size; k <= sample_size; k++) {
                    sum += texture(srcImage, uvw + (vec3(i, j, k) * vec3(texel_size)));
                    taps++;
                }
            }
        }
        const vec4 filtered = sum / float(taps);
        const uint current_face_index = gl_GlobalInvocationID.z % 6u;
        imageStore(dstImage, ivec3(pixel, current_face_index), filtered);
    }
}
```

Where `srcImage` and `dstImage` are always the same image, but `srcImage` is always one mip level before the mip level of `dstImage`.

This line: `const uvec2 dst_size = uvec2(imageSize(dstImage).xy);` always used to return the correct result, but since driver 565.90 it always returns the size of the largest mip, even when the descriptor contains a `VkImageView` with a mip level other than 0.
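Concretely, "a `VkImageView` with a mip level other than 0" means a per-mip view along these lines (a sketch of a setup like mine; the format, handle names, and `dst_mip` variable are assumptions, not the exact original code):

```c
VkImageViewCreateInfo view_info = {
    .sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO,
    .image = cubemap_image,                  /* hypothetical VkImage handle */
    .viewType = VK_IMAGE_VIEW_TYPE_CUBE,
    .format = VK_FORMAT_R16G16B16A16_SFLOAT, /* assumed format */
    .subresourceRange = {
        .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
        .baseMipLevel = dst_mip, /* e.g. 1 -- imageSize on this view should be 512 */
        .levelCount = 1,
        .baseArrayLayer = 0,
        .layerCount = 6,
    },
};
vkCreateImageView(device, &view_info, NULL, &dst_view);
```

Per the Vulkan spec, `imageSize` on a storage-image view should report the extent of the view's base mip level, which is what every earlier driver did.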

So with a texture size of `1024`, `srcImage` has a size of `1024` and `dstImage` a size of `512`. However, the `imageSize(dstImage)` call returns `1024`.

I worked around this bug by using a push constant for the target mip size, which **does** work. I have not tested whether `textureSize` has the same bug.
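The workaround looks roughly like this on the shader side (a sketch of my fix; the push-constant block name is hypothetical):

```glsl
// Push constant carrying the size of the mip level being written.
layout(push_constant) uniform PushConstants {
    uvec2 dst_size;
} pc;

// In main(), replace:
//     const uvec2 dst_size = uvec2(imageSize(dstImage).xy);
// with:
const uvec2 dst_size = pc.dst_size;
```

The host already knows the target mip extent when it records the dispatch, so pushing it explicitly sidesteps the driver's `imageSize` result entirely.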

When I step through the code in RenderDoc, it does say that `imageSize` returns `512` (for mip level 1), but that is probably some sort of shader-debug emulation rather than the value the driver actually produced.

I also tried this on a different PC (also with an RTX 3080) using a driver from last month (September 2024), which confirmed that this is a new bug in the newly released driver.