I create an EGL RGBA32 texture, but the alpha is always 1.0 when checked in the shader?

Here is the code I use to create the texture:

NvBufferCreateParams params = {
     .width = TEXTURE_WIDTH,
     .height = TEXTURE_HEIGHT,
     .payloadType = NvBufferPayload_SurfArray,
     .memsize = TEXTURE_WIDTH * TEXTURE_HEIGHT * 4,
     .layout = NvBufferLayout_Pitch,
     .colorFormat = NvBufferColorFormat_ARGB32,
     .nvbuf_tag = NvBufferTag_NONE
};
ckt(NvBufferCreateEx(&f_texture_fd, &params));

Then I want to draw a picture in that area, so I use the following to map the texture to a memory pointer:

NvBufferMemMap(
          f_texture_fd
        , 0
        , NvBufferMem_Read_Write
        , reinterpret_cast<void **>(&f_texture));

NvBufferMemSyncForCpu(
          f_texture_fd
        , 0
        , reinterpret_cast<void **>(&f_texture));

The second call is there to make sure that the memory buffer is properly synchronized for CPU access.

In a similar manner, I unmap the texture with the following; as you can see, I also make a synchronization call first so that the GPU sees my changes:

NvBufferMemSyncForDevice(f_texture_fd, 0, reinterpret_cast<void **>(&f_texture));

ckt(NvBufferMemUnMap(
          f_texture_fd
        , 0
        , reinterpret_cast<void **>(&f_texture)));
f_texture = nullptr;  // pointer was invalidated

That unmapping happens right after I finished drawing that image.

Here is an example showing how I render an image into the texture:

std::uint8_t * output(f_texture);
for(int idx(0); idx < TEXTURE_WIDTH * TEXTURE_HEIGHT; ++idx, output += 4)
{
    output[0] = blue;
    output[1] = green;
    output[2] = red;
    output[3] = alpha;   // <-- whatever I put here, in the fragment shader `tex.a == 1.0`
}
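One detail worth double-checking in a fill loop like this: a pitch-linear NvBuffer's row stride is typically wider than `TEXTURE_WIDTH * 4` because of alignment (the real pitch can be queried with NvBufferGetParams()). A minimal sketch of a pitch-aware fill in plain C++, with made-up sizes standing in for the real values:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sizes; on the device, read the actual pitch from NvBufferGetParams().
constexpr int kWidth  = 4;
constexpr int kHeight = 2;
constexpr int kPitch  = 32;   // bytes per row, often > kWidth * 4 due to alignment

void fill_bgra(std::uint8_t *base,
               std::uint8_t blue, std::uint8_t green,
               std::uint8_t red, std::uint8_t alpha)
{
    for(int y = 0; y < kHeight; ++y)
    {
        std::uint8_t *row = base + y * kPitch;   // step by pitch, not width * 4
        for(int x = 0; x < kWidth; ++x, row += 4)
        {
            row[0] = blue;
            row[1] = green;
            row[2] = red;
            row[3] = alpha;
        }
    }
}
```

If the buffer really is tightly packed (pitch equals width times four bytes), the original single loop is equivalent; otherwise the single loop would smear pixels across the row padding.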

Now I can work on the rendering. Since I use EGL, I first need a vertex shader:

precision mediump float;

varying vec2 interp_tc;

// the input position includes (x,y) for the vertex and (x,y) for the texture
attribute vec4 in_position;

void main()
{
    interp_tc = in_position.zw;
    gl_Position = vec4(in_position.xy, 0, 1);
}
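Since `in_position` packs the clip-space position in (x, y) and the texture coordinates in (z, w), the six vertices consumed by `glDrawArrays(GL_TRIANGLES, 0, 6)` would be a full-screen quad along these lines (the array name is hypothetical, not taken from my actual code):

```cpp
// Two triangles covering clip space; each vertex is (x, y, s, t),
// matching the vec4 in_position attribute of the vertex shader.
static const float quad_vertices[6 * 4] = {
    // x      y     s     t
    -1.0f, -1.0f, 0.0f, 1.0f,
     1.0f, -1.0f, 1.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 0.0f,

    -1.0f, -1.0f, 0.0f, 1.0f,
     1.0f,  1.0f, 1.0f, 0.0f,
    -1.0f,  1.0f, 0.0f, 0.0f,
};
```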

and then a fragment shader:

#extension GL_OES_EGL_image_external : require

precision mediump float;

varying vec2 interp_tc;
uniform samplerExternalOES tex;

void main()
{
    gl_FragColor = texture2D(tex, interp_tc);
}

Finally, here is the code I use to render the texture:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
f_egl_image = NvEGLImageFromFd(
              f_egl_display
            , f_texture_fd);
glUseProgram(f_program);
glActiveTexture(f_texture_id);
glBindTexture(GL_TEXTURE_EXTERNAL_OES, f_texture);
panel_context::glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, f_egl_image);
glDrawArrays(GL_TRIANGLES, 0, 6);
glUseProgram(0);

glSwapBuffers();

NvDestroyEGLImage(f_egl_display, f_egl_image);
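For reference, the `glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)` setup computes, per channel, `result = src * src_a + dst * (1 - src_a)`. A quick plain-C++ sketch of that arithmetic (no GL involved) shows why a fragment alpha stuck at 1.0 makes the texture fully opaque:

```cpp
#include <cstdint>

// Classic "source over" blend, as configured by
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), for one 8-bit channel.
std::uint8_t blend_channel(std::uint8_t src, std::uint8_t dst, std::uint8_t src_a)
{
    float const a = src_a / 255.0f;
    float const out = src * a + dst * (1.0f - a);
    return static_cast<std::uint8_t>(out + 0.5f);   // round to nearest
}
```

With `src_a` at 255, the destination contributes nothing, which matches the "100% solid" symptom described below.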

The render works just fine for the RGB part (I get the correct colors), but I cannot see through the texture, even where the alpha is not 255 (1.0). To test, I even changed the loop above to use rand() like so:

    output[3] = rand();   // use some random alpha

That should give me a fluctuating alpha channel, which on screen would show up as a mix of the background and this image. Instead, the image still appears 100% solid.

Also, the blending itself works just fine, since I can tweak the alpha channel in the fragment shader and it behaves as expected. For example, I could tweak the shader like so:

if(texture2D(tex, interp_tc).a == 1.0)
{
    gl_FragColor = vec4(1.0, 0.75, 0.0, 0.3);
}
else
{
    gl_FragColor = texture2D(tex, interp_tc);
}

and the image comes out orangey, because the input alpha is always 1.0 and gl_FragColor is set to a quite transparent orange. I can see through that orange as expected (i.e. the 0.3 is correct and works as I’d expect).

I also tried the following:

gl_FragColor = vec4(texture2D(tex, interp_tc).r,
                    texture2D(tex, interp_tc).g,
                    texture2D(tex, interp_tc).b,
                    0.5);

and sure enough, the image appears with 50% transparency, so I see the background through my extra texture.

In other words, I can make both the RGB and the alpha work; only the tex texture does not seem to carry the alpha I put into it.

Reading “Fragment shader always uses 1.0 for alpha channel” makes me think that somehow accessing my texture does the equivalent of:

vec4(r, g, b, 1.0);

But I don’t use a depth buffer or any special magic, and I clearly allocate an NvBufferColorFormat_ARGB32 buffer for my texture.

Is the alpha simply not supported in an NvBuffer, even when we use ARGB32? (It is marked as legacy, and the only other color format with alpha, ABGR32, is also marked as legacy… maybe your NvBuffers don’t support that simple feature?)

Hi,
Your understanding is correct. The alpha channel is ignored in this use case. We suggest creating another NvBuffer (say new_texture_fd), and after you apply this fill to f_texture_fd:

std::uint8_t * output(f_texture);
for(int idx(0); idx < TEXTURE_WIDTH * TEXTURE_HEIGHT; ++idx, output += 4)
{
    output[0] = blue;
    output[1] = green;
    output[2] = red;
    output[3] = alpha;   // <-- whatever I put here, in the fragment shader `tex.a == 1.0`
}

please call NvBufferTransform(new_texture_fd, f_texture_fd). In new_texture_fd, you should see the alpha value applied to the B, G and R channels, and the alpha channel itself become 1.0.
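In other words, the transform premultiplies: each color channel is scaled by the pixel’s alpha, and the alpha itself is forced to fully opaque. What that does to a single BGRA pixel can be sketched in plain C++ (this mirrors the description above; it is not the NvBuffer API itself):

```cpp
#include <cstdint>

// What the transform described above does to one BGRA pixel:
// B, G and R are scaled by alpha, then alpha is set to fully opaque.
void premultiply_pixel(std::uint8_t px[4])
{
    unsigned const a = px[3];
    for(int c = 0; c < 3; ++c)
    {
        px[c] = static_cast<std::uint8_t>(px[c] * a / 255);
    }
    px[3] = 255;   // the alpha channel becomes 1.0
}
```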

Hi DaneLLL,

I’m sorry, but I don’t understand how NvBufferTransform() would help. I could multiply my RGB components by the alpha value, but I need a texture that I can render over another texture. My output already has a background rendered, and I now need to render this buffer over it with an alpha channel that varies as defined in the output[3] plane. If that plane can’t make it to the shader, then this won’t work for me.

Would I instead need to use a regular OpenGL texture?

Thank you.
Alexis