Why does glTexImage2D() not like GL_ALPHA as internal format?

    unsigned int iformat = format == 1 ? GL_ALPHA : format == 3 ? GL_RGB : GL_RGBA;
    unsigned int xformat = format == 1 ? GL_RED   : format == 3 ? GL_RGB : GL_RGBA;
    // GL_ALPHA fails as iformat in glTexImage2D(). Yet, it's listed as a cromulent format in the spec.
    // iformat = GL_RED;
    glTexImage2D(GL_TEXTURE_2D, 0, iformat, w, h, 0, xformat, GL_UNSIGNED_BYTE, NULL);

This code fails for textures with 1 byte per pixel. GL_ALPHA as the internal format and GL_RED as the external format should be supported according to the OpenGL 4.2 specification: glTexImage2D - OpenGL 4 - docs.gl
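For reference, a minimal sketch of how the failure can be confirmed by draining the GL error queue right after the call (the printed value is illustrative, not captured output):

    // Sketch: GL headers, a current context, and <stdio.h> are assumed to be in place.
    glTexImage2D(GL_TEXTURE_2D, 0, iformat, w, h, 0, xformat, GL_UNSIGNED_BYTE, NULL);
    // Drain the error queue so every pending error from the call is reported.
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError()) {
        fprintf(stderr, "glTexImage2D error: 0x%04x\n", err);  // e.g. 0x0501 == GL_INVALID_VALUE
    }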

If I set the internal format to GL_RED then the call is accepted, but that samples the texture differently than the specified behavior for GL_ALPHA: a GL_RED texture samples as (r, 0, 0, 1), whereas GL_ALPHA is specified to sample as (0, 0, 0, a).
(Which I can compensate for in my shader, but … what's wrong with GL_ALPHA?)
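A workaround sketch that avoids touching the shader, assuming the driver exposes the GL 3.3 texture swizzle parameters: upload the one-channel data as GL_R8/GL_RED and swizzle the sampled result so .a still returns the value.

    // Sketch, assuming GL_TEXTURE_SWIZZLE_RGBA (GL 3.3+) is available on this driver:
    // upload the 1-byte-per-pixel data as GL_R8/GL_RED and swizzle the sampled result
    // to (0, 0, 0, red), which matches what a legacy GL_ALPHA texture returns.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0, GL_RED, GL_UNSIGNED_BYTE, NULL);
    GLint swizzle[4] = { GL_ZERO, GL_ZERO, GL_ZERO, GL_RED };
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);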

The renderer/JetPack version is the default as delivered from Amazon:
Renderer: NVIDIA Tegra X1 (nvgpu)/integrated
GL version: 4.2.0 NVIDIA 32.1.0
GLSL version: 4.20 NVIDIA via Cg compiler

Btw, “w” and “h” are 256. I think this is an NVIDIA GL driver bug.

Hi,
The spec does not allow GL_ALPHA to be requested as an internal format.
From the spec: glTexImage2D - OpenGL 4 - docs.gl

internalFormat 
Specifies the number of color components in the texture. Must be one of base internal formats given in Table 1, one of the sized internal formats given in Table 2, or one of the compressed internal formats given in Table 3, below.

None of the tables list GL_ALPHA as an accepted internal format.
So, the driver is right in not accepting it.
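For the posted selection logic, a sketch of what the tables do accept (the sized formats shown here are one valid choice, not the only one):

    // Sketch: pick internal formats that appear in the tables (sized formats from Table 2;
    // the base formats GL_RED / GL_RGB / GL_RGBA from Table 1 are accepted as well).
    unsigned int iformat = format == 1 ? GL_R8  : format == 3 ? GL_RGB8 : GL_RGBA8;
    unsigned int xformat = format == 1 ? GL_RED : format == 3 ? GL_RGB  : GL_RGBA;
    glTexImage2D(GL_TEXTURE_2D, 0, iformat, w, h, 0, xformat, GL_UNSIGNED_BYTE, NULL);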

It was also clarified in this related thread that GL_ALPHA was removed in GL 3 (core profile):
https://devtalk.nvidia.com/default/topic/1056512/jetson-nano/why-does-glvertexattribpointer-return-gl_invalid_operation-/post/5357801/#5357801