I create an EGL RGBA32 texture, but the alpha is always 1.0 when checked in the shader. Why?

Here is the code I use to create the texture:

NvBufferCreateParams params = {
     .width = TEXTURE_WIDTH,
     .height = TEXTURE_HEIGHT,
     .payloadType = NvBufferPayload_SurfArray,
     .memsize = TEXTURE_WIDTH * TEXTURE_HEIGHT * 4,
     .layout = NvBufferLayout_Pitch,
     .colorFormat = NvBufferColorFormat_ARGB32,
     .nvbuf_tag = NvBufferTag_NONE
};
ckt(NvBufferCreateEx(&f_texture_fd, &params));

Then I want to draw a picture in that area, so I use the following to map the texture to a memory pointer:

NvBufferMemMap(
          f_texture_fd
        , 0
        , NvBufferMem_Read_Write
        , reinterpret_cast<void **>(&f_texture));

NvBufferMemSyncForCpu(
          f_texture_fd
        , 0
        , reinterpret_cast<void **>(&f_texture));

The second call is there to make sure that the memory buffers are properly synchronized before the CPU touches them.

In a similar manner, I unmap the texture with the following. As you can see, I also have a synchronization call so the GPU sees my changes:

NvBufferMemSyncForDevice(f_texture_fd, 0, reinterpret_cast<void **>(&f_texture));

ckt(NvBufferMemUnMap(
          f_texture_fd
        , 0
        , reinterpret_cast<void **>(&f_texture)));
f_texture = nullptr;  // pointer was invalidated

That unmapping happens right after I finish drawing the image.

Here is an example showing how I render an image into the texture:

std::uint8_t * output(f_texture);
for(int idx(0); idx < TEXTURE_WIDTH * TEXTURE_HEIGHT; ++idx, output += 4)
{
    output[0] = blue;
    output[1] = green;
    output[2] = red;
    output[3] = alpha;   // <-- whatever I put here, in the fragment shader `tex.a == 1.0`
}
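A side note for anyone copying this: the loop assumes the pitch of the mapped buffer is exactly TEXTURE_WIDTH * 4. For a pitch-linear NvBuffer that is not guaranteed, so a pitch-aware version (a minimal sketch using NvBufferGetParams(); blue, green, red and alpha stand in for the actual values) walks the buffer row by row:

NvBufferParams buffer_params;
ckt(NvBufferGetParams(f_texture_fd, &buffer_params));
for(int y(0); y < TEXTURE_HEIGHT; ++y)
{
    // pitch[0] is the stride of plane 0 in bytes (possibly > TEXTURE_WIDTH * 4)
    std::uint8_t * row(f_texture + y * buffer_params.pitch[0]);
    for(int x(0); x < TEXTURE_WIDTH; ++x, row += 4)
    {
        row[0] = blue;
        row[1] = green;
        row[2] = red;
        row[3] = alpha;
    }
}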

Now I can work on the rendering. Since I use OpenGL ES through EGL, I first need a vertex shader:

precision mediump float;

varying vec2 interp_tc;

// the input position includes (x,y) for the vertex and (x,y) for the texture
attribute vec4 in_position;

void main()
{
    interp_tc = in_position.zw;
    gl_Position = vec4(in_position.xy, 0, 1);
}

and then a fragment shader:

#extension GL_OES_EGL_image_external : require

precision mediump float;

varying vec2 interp_tc;
uniform samplerExternalOES tex;

void main()
{
    gl_FragColor = texture2D(tex, interp_tc);
}

Finally, here is the code I use to render the texture:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
f_egl_image = NvEGLImageFromFd(
              f_egl_display
            , f_texture_fd);
glUseProgram(f_program);
glActiveTexture(GL_TEXTURE0);  // takes a texture unit, not a texture name
glBindTexture(GL_TEXTURE_EXTERNAL_OES, f_texture_id);  // f_texture_id: GLuint from glGenTextures()
panel_context::glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, f_egl_image);
glDrawArrays(GL_TRIANGLES, 0, 6);
glUseProgram(0);

eglSwapBuffers(f_egl_display, f_egl_surface);  // f_egl_surface: the EGL window surface

NvDestroyEGLImage(f_egl_display, f_egl_image);
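For reference, glDrawArrays(GL_TRIANGLES, 0, 6) above draws a full-screen quad as two triangles. The data bound to in_position looks roughly like this (a sketch, set up once at initialization; GLES 2.0 allows a client-side array when no buffer object is bound):

// each vertex is (x, y, u, v), packed into the vec4 in_position attribute
static GLfloat const quad[6 * 4] = {
    // x      y     u     v
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f, -1.0f, 1.0f, 0.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
    -1.0f, -1.0f, 0.0f, 0.0f,
     1.0f,  1.0f, 1.0f, 1.0f,
    -1.0f,  1.0f, 0.0f, 1.0f,
};
GLint const pos(glGetAttribLocation(f_program, "in_position"));
glVertexAttribPointer(pos, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), quad);
glEnableVertexAttribArray(pos);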

The render works just fine for the RGB part (I get the correct colors), but I cannot see through the image, even where the alpha is not 255 (1.0) in my texture. To test, I even changed the loop above to use rand() like so:

    output[3] = rand();   // use some random alpha (truncated to 0..255)

That should give me a fluctuating alpha channel, which on screen would show up as a mix of the background and this image. Instead, the image still appears 100% solid.

Also, the blending itself works just fine, since I can tweak the alpha channel in the fragment shader and it behaves as expected. For example, I can change the shader like so:

if(texture2D(tex, interp_tc).a == 1.0)
{
    gl_FragColor = vec4(1.0, 0.75, 0.0, 0.3);
}
else
{
    gl_FragColor = texture2D(tex, interp_tc);
}

and the image is orange-tinted because the input alpha is always 1.0, so gl_FragColor is set to a fairly transparent orange. I can see through that orange as expected (i.e. the 0.3 is honored and works as I’d expect).

I also tried the following:

gl_FragColor = vec4(texture2D(tex, interp_tc).r,
                    texture2D(tex, interp_tc).g,
                    texture2D(tex, interp_tc).b,
                    0.5);

and sure enough, the image appears with 50% transparency, so I see the background through my extra texture.

In other words, I can make the RGB and the alpha work; only the tex sampler does not seem to carry the alpha I wrote into the buffer.

Reading “Fragment shader always uses 1.0 for alpha channel” makes me think that somehow accessing my texture does the equivalent of:

vec4(r, g, b, 1.0);

But I don’t use a depth buffer or any special magic, and I clearly allocate an NvBufferColorFormat_ARGB32 buffer for my texture.

Is the alpha not supported in an NvBuffer, even when we use ARGB32? (Since it is marked as legacy, and the only other color format with alpha (ABGR32) is also marked as legacy… maybe your NvBuffers don’t support that simple feature?)

Hi,
Your understanding is correct. The alpha channel is ignored in this use case. We suggest creating another NvBuffer (say, new_texture_fd). After you fill f_texture_fd as you do here:

std::uint8_t * output(f_texture);
for(int idx(0); idx < TEXTURE_WIDTH * TEXTURE_HEIGHT; ++idx, output += 4)
{
    output[0] = blue;
    output[1] = green;
    output[2] = red;
    output[3] = alpha;   // <-- whatever I put here, in the fragment shader `tex.a == 1.0`
}

Please call NvBufferTransform(f_texture_fd, new_texture_fd, &transform_params), as sketched below. In new_texture_fd, you should see the alpha value applied to the B, G and R channels, and the alpha channel become 1.0.
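A minimal sketch of that call, assuming new_texture_fd was created with the same NvBufferCreateParams as f_texture_fd, and that TEXTURE_WIDTH/TEXTURE_HEIGHT are the dimensions used above:

// blend f_texture_fd (source, alpha written by the CPU) into
// new_texture_fd (destination); the other fields keep their defaults
NvBufferTransformParams transform_params = {};
transform_params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
transform_params.transform_filter = NvBufferTransform_Filter_Smart;
transform_params.src_rect = { 0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT };
transform_params.dst_rect = { 0, 0, TEXTURE_WIDTH, TEXTURE_HEIGHT };
NvBufferTransform(f_texture_fd, new_texture_fd, &transform_params);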

Hi DaneLLL,

I’m sorry, but I don’t understand how NvBufferTransform() would help. I could multiply my RGB components by the alpha value, but I need a texture which I can render over another texture. My output already has a background rendered, and I now need to render this buffer over it with an alpha channel that varies as defined in the output[3] values. If that plane can’t make it to the shader, then this won’t work for me.

Would I instead need to use a regular OpenGL texture?
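(By a “regular OpenGL texture” I mean something along these lines; a sketch where pixels is a hypothetical pointer to my CPU-side RGBA bytes:)

// plain GLES 2.0 path: upload RGBA bytes and sample them with a sampler2D
GLuint texture(0);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TEXTURE_WIDTH, TEXTURE_HEIGHT,
             0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);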

Thank you.
Alexis

Hi,
Please make a patch to the existing samples replicating the issue, so that we can reproduce it and check with other teams. The default behavior is that after modifying the alpha channel in an NvBuffer with an RGBA format, you need to call NvBufferTransform() to apply the effect to the RGB of another NvBuffer.

Okay, so I updated my code and I still don’t get any transparency.

void setup_icons()
{
    Magick::Image icons(Magick::Image("/usr/share/ve/images/controller-icons.png"));
    Magick::Geometry geometry(icons.size());
    int const width(geometry.width());
    int const height(geometry.height());

    f_icons_drawer = std::make_shared<image_shader>(
                      f_panel_context
                    , "icons");
    f_icons_drawer->create_program();

    NvBufferCreateParams temporary_create_params = {
         .width = static_cast<uint32_t>(width),
         .height = static_cast<uint32_t>(height),
         .payloadType = NvBufferPayload_SurfArray,
         .memsize = width * height * 4,
         .layout = NvBufferLayout_Pitch,
         .colorFormat = NvBufferColorFormat_ARGB32,
         .nvbuf_tag = NvBufferTag_NONE,
    };

    int temporary_image_fd(-1);

    ckt(NvBufferCreateEx(
              &temporary_image_fd
            , &temporary_create_params));

    std::uint8_t * image(nullptr);
    ckt(NvBufferMemMap(
              temporary_image_fd
            , 0
            , NvBufferMem_Read_Write
            , reinterpret_cast<void **>(&image)));
    ckt(NvBufferMemSyncForCpu(
              temporary_image_fd
            , 0
            , reinterpret_cast<void **>(&image)));

    icons.write(
              0
            , 0
            , width
            , height
            , "ARGB"
            , Magick::CharPixel
            , image);

    ckt(NvBufferMemSyncForDevice(
              temporary_image_fd
            , 0
            , reinterpret_cast<void **>(&image)));
    ckt(NvBufferMemUnMap(
              temporary_image_fd
            , 0
            , reinterpret_cast<void **>(&image)));

    NvBufferCreateParams create_params = {
         .width = static_cast<uint32_t>(width),
         .height = static_cast<uint32_t>(height),
         .payloadType = NvBufferPayload_SurfArray,
         .memsize = width * height * 4,
         .layout = NvBufferLayout_Pitch,
         .colorFormat = NvBufferColorFormat_ARGB32,
         .nvbuf_tag = NvBufferTag_NONE,
    };

    ckt(NvBufferCreateEx(
              &f_image_fd
            , &create_params));

    NvBufferTransformParams transform_params = {
        .transform_flag = NVBUFFER_TRANSFORM_FILTER | NVBUFFER_TRANSFORM_FLIP,
        .transform_flip = NvBufferTransform_FlipY,
        .transform_filter = NvBufferTransform_Filter_Smart,
        .src_rect = { 0, 0, static_cast<std::uint32_t>(width), static_cast<std::uint32_t>(height) },
        .dst_rect = { 0, 0, static_cast<std::uint32_t>(width), static_cast<std::uint32_t>(height) },
        .session = nullptr,
    };

    ckt(NvBufferTransform(
              temporary_image_fd
            , f_image_fd
            , &transform_params));

    NvBufferDestroy(temporary_image_fd);   // the temporary buffer is no longer needed
}

I use this vertex shader, which has translation and scaling features. However, I do not use them here: I set the translation to (0, 0) and the scale to 1.

varying vec2 interp_tc;

// the input position includes (x,y) for the vertex and (x,y) for the texture
attribute vec4 in_position;

// (x, y) used as translation and (z) as the scale
uniform vec3 in_translation_scale;

void main()
{
    interp_tc = in_position.zw;
    float scale = in_translation_scale.z;
    float x = in_position.x * scale + in_translation_scale.x;
    float y = in_position.y * scale + in_translation_scale.y;
    gl_Position = vec4(x, y, 0, 1);
}
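For completeness, here is how the uniform is set for those defaults (a sketch; f_program stands for whatever program object create_program() built):

glUseProgram(f_program);
// no translation, unit scale
GLint const loc(glGetUniformLocation(f_program, "in_translation_scale"));
glUniform3f(loc, 0.0f, 0.0f, 1.0f);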

Finally, I use this fragment shader, which picks the color from the texture:

#extension GL_OES_EGL_image_external : require

precision mediump float;
varying vec2 interp_tc;
uniform samplerExternalOES tex;

void main()
{
    gl_FragColor = texture2D(tex, interp_tc);
}

We can recognize interp_tc from the vertex shader.

The final result is black instead of transparent, even where the alpha channel is 0 (i.e. fully transparent). Just in case, I also tried applying the alpha to the color components myself, with and without the extra NvBufferTransform().

So it 100% sounds like the alpha channel is no longer supported with NvBuffer, which is okay, but maybe it should be clearly documented so people don’t waste their time trying to make use of it.

I uploaded the PNG image to my website here:

It’s white over transparent, so it will probably appear just white in this post. Right-click to save it and/or get the URL and check it in your browser. Really nothing fancy: a few icons at the top of the image for now.

Let me know how to make it work if possible. So far, from what I can tell, it’s never going to work.

Hi,
Here is a patch for applying alpha blending to NvBuffer:

diff --git a/multimedia_api/ll_samples/samples/12_camera_v4l2_cuda/camera_v4l2_cuda.cpp b/multimedia_api/ll_samples/samples/12_camera_v4l2_cuda/camera_v4l2_cuda.cpp
index 6aed080..b79912c 100644
--- a/multimedia_api/ll_samples/samples/12_camera_v4l2_cuda/camera_v4l2_cuda.cpp
+++ b/multimedia_api/ll_samples/samples/12_camera_v4l2_cuda/camera_v4l2_cuda.cpp
@@ -45,6 +45,7 @@
 #include "camera_v4l2_cuda.h"
 
 static bool quit = false;
+static int inter_fd = 0;
 
 using namespace std;
 
@@ -374,12 +375,14 @@ prepare_buffers(context_t * ctx)
 
     }
 
-    input_params.colorFormat = get_nvbuff_color_fmt(V4L2_PIX_FMT_YUV420M);
+    input_params.colorFormat = NvBufferColorFormat_ABGR32;
     input_params.nvbuf_tag = NvBufferTag_NONE;
     // Create Render buffer
     if (-1 == NvBufferCreateEx(&ctx->render_dmabuf_fd, &input_params))
         ERROR_RETURN("Failed to create NvBuffer");
 
+    NvBufferCreateEx(&inter_fd, &input_params);
+
     if (!request_camera_buff(ctx))
         ERROR_RETURN("Failed to set up camera buff");
 
@@ -485,7 +488,10 @@ start_capture(context_t * ctx)
 
             cuda_postprocess(ctx, ctx->render_dmabuf_fd);
 
-            ctx->renderer->render(ctx->render_dmabuf_fd);
+            NvBufferTransform(ctx->render_dmabuf_fd, inter_fd,
+                        &transParams);
+
+            ctx->renderer->render(inter_fd);
 
             // Enqueue camera buff
             if (ioctl(ctx->cam_fd, VIDIOC_QBUF, &v4l2_buf))
diff --git a/multimedia_api/ll_samples/samples/common/algorithm/cuda/NvAnalysis.cu b/multimedia_api/ll_samples/samples/common/algorithm/cuda/NvAnalysis.cu
index e96be22..d37d231 100644
--- a/multimedia_api/ll_samples/samples/common/algorithm/cuda/NvAnalysis.cu
+++ b/multimedia_api/ll_samples/samples/common/algorithm/cuda/NvAnalysis.cu
@@ -31,16 +31,23 @@
 
 #define BOX_W 32
 #define BOX_H 32
+#define DEMO_H 256
 
 __global__ void
 addLabelsKernel(int *pDevPtr, int pitch)
 {
-    int row = blockIdx.y * blockDim.y + threadIdx.y + BOX_H;
-    int col = blockIdx.x * blockDim.x + threadIdx.x + BOX_W;
-    char *pElement = (char *)pDevPtr + row * pitch + col;
-
-    pElement[0] = 0;
-
+  int row = blockIdx.y*blockDim.y + threadIdx.y;
+  int col = blockIdx.x*blockDim.x + threadIdx.x;
+  if (col <= (pitch/2) && row <= (DEMO_H/2) && (col % 4) == 3) {
+    char * pElement = (char*)pDevPtr + row * pitch + col;
+    pElement[0] = (char)223;
+  } else if (col <= pitch && row <= (DEMO_H/2) && (col % 4) == 3) {
+    char * pElement = (char*)pDevPtr + row * pitch + col;
+    pElement[0] = (char)63;
+  } else if (col <= pitch && row <= DEMO_H && (col % 4) == 3) {
+    char * pElement = (char*)pDevPtr + row * pitch + col;
+    pElement[0] = (char)191;
+  }
     return;
 }
 
@@ -48,7 +55,7 @@ int
 addLabels(CUdeviceptr pDevPtr, int pitch)
 {
     dim3 threadsPerBlock(BOX_W, BOX_H);
-    dim3 blocks(1,1);
+    dim3 blocks(pitch/BOX_W+1,DEMO_H/BOX_H+1);
 
     addLabelsKernel<<<blocks,threadsPerBlock>>>((int *)pDevPtr, pitch);

You should see the effect by running the command:

12_camera_v4l2_cuda$ ./camera_v4l2_cuda -d /dev/video0 -s 640x480 -f UYVY -c

Please check if you can follow this method to modify the alpha channel through CUDA.
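For reference, here is a minimal sketch of how the sample reaches the NvBuffer from CUDA (the real code lives in HandleEGLImage()/cuda_postprocess(); egl_display and dmabuf_fd are assumed valid, and error checks are omitted):

// map the NvBuffer into CUDA through its EGLImage
EGLImageKHR egl_image = NvEGLImageFromFd(egl_display, dmabuf_fd);
CUgraphicsResource resource;
CUeglFrame egl_frame;
cuGraphicsEGLRegisterImage(&resource, egl_image,
                           CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);
// frame.pPitch[0] is the device pointer to plane 0; hand it to the kernel
addLabels((CUdeviceptr)egl_frame.frame.pPitch[0], egl_frame.pitch);
cuCtxSynchronize();
cuGraphicsUnregisterResource(resource);
NvDestroyEGLImage(egl_display, egl_image);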