Hello,
While implementing generic handling of compressed texture formats in our OpenGL code, we stumbled upon behaviour that does not seem to match the OpenGL 4.6 core profile specification.
From the spec, p. 579 (emphasis ours):
TEXTURE_COMPRESSED_BLOCK_SIZE: If the resource contains a compressed
format, the number of bytes per block is returned in params. If
the internal format is not compressed, or the resource is not supported, 0 is
returned. Together with the block width and height queries this allows the
bitrate to be computed, and may be useful in conjunction with ARB compressed
texture pixel storage.
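As an illustration of the bitrate computation the spec alludes to, a minimal sketch (our own naming, and assuming the block width/height queries report texel dimensions) would be:

GLint blockWidth = 0, blockHeight = 0, blockBytes = 0;
glGetInternalformativ(GL_TEXTURE_2D, internalFormat,
                      GL_TEXTURE_COMPRESSED_BLOCK_WIDTH, 1, &blockWidth);
glGetInternalformativ(GL_TEXTURE_2D, internalFormat,
                      GL_TEXTURE_COMPRESSED_BLOCK_HEIGHT, 1, &blockHeight);
glGetInternalformativ(GL_TEXTURE_2D, internalFormat,
                      GL_TEXTURE_COMPRESSED_BLOCK_SIZE, 1, &blockBytes);
/* Per the quoted wording, BLOCK_SIZE is in bytes, so bits per texel should be: */
const double bitsPerTexel = (blockBytes * 8.0) / (double)(blockWidth * blockHeight);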
When we query GL_TEXTURE_COMPRESSED_BLOCK_SIZE with the following code:

const GLenum internalFormat = ...
GLint blockByteSize = 0;
glGetInternalformativ(GL_TEXTURE_2D,
                      internalFormat,
                      GL_TEXTURE_COMPRESSED_BLOCK_SIZE,
                      1,
                      &blockByteSize);
Then blockByteSize is:
- 64 for GL_COMPRESSED_RED_RGTC1, which is a 4x4 block at 4 bpp, so the expected block size is 8 bytes (64 bits)
- 128 for GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, which is a 4x4 block at 8 bpp, so the expected block size is 16 bytes (128 bits)
As far as we can tell, the driver returns the compressed block size in bits, not bytes.
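The following small test illustrates the discrepancy (our own code, assuming an existing GL context; printf needs <stdio.h>):

struct { GLenum format; GLint expectedBytes; } cases[] = {
    { GL_COMPRESSED_RED_RGTC1,             8  },  /* 4x4 block, 4 bpp */
    { GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM, 16 },  /* 4x4 block, 8 bpp */
};
for (int i = 0; i < 2; ++i) {
    GLint queried = 0;
    glGetInternalformativ(GL_TEXTURE_2D, cases[i].format,
                          GL_TEXTURE_COMPRESSED_BLOCK_SIZE, 1, &queried);
    /* On our driver the queried value matches expectedBytes * 8, i.e. bits. */
    printf("0x%04X: queried %d, expected %d bytes (%d bits)\n",
           cases[i].format, queried, cases[i].expectedBytes,
           cases[i].expectedBytes * 8);
}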
From discussion on the Khronos forums, it seems our reading of the spec is correct.
Is this a bug in the driver we are currently using (NVIDIA GRD 546.33)?