[Solved] Argus Convert YUV420(YcbCr420_888) to RGB

Hello everyone,

I am looking for an easy way to convert YUV420 to RGB.

I need:

  • capture an image, [OK]
  • write the image to jpeg, [OK]
  • export preview, [NOT OK]

How I do it:

[ARGUS LIB]

EGLStream::IImage *iImageData = Argus::interface_cast<EGLStream::IImage>(image);
yuv_to_rgb((char *)iImageData->mapBuffer(), rgb, width, height); // ! Conversion does not work !

Inexplicably, I always get ‘green’ images.

The conversion formulas from YUV - Wikipedia also do not work.

I am looking for: a description of the YUV420 format for an encoder/decoder, an example of how to decode it, or a contact at support.

I do not need to display this image on TX1 (only decode this format).

Best regards and thanks for the help.
[Attachment: yuv2rgb.png]

Perhaps someone knows what I am doing wrong in this converter.

#include <algorithm>   // for std::clamp (C++17)

// Converts tightly packed planar YUV420 (I420: full-resolution Y plane,
// followed by quarter-resolution U and V planes) to packed RGB24.
// Based on: YUV - Wikipedia
void yuv2rgb(unsigned char *yuv, unsigned char *rgb, int width, int height)
{
    int pos = 0;

    // Plane pointers: Y is width*height bytes, U and V are each
    // ((width+1)/2) * ((height+1)/2) bytes.
    unsigned char *_y = yuv;
    unsigned char *_u = yuv + width * height;
    unsigned char *_v = _u + ((width + 1) / 2) * ((height + 1) / 2);

    const int chromaWidth = (width + 1) / 2;   // U/V rows are half the width

    for (int py = 0; py < height; py += 1)
    {
        // Each chroma row is shared by two consecutive luma rows.
        int posUV = (py / 2) * chromaWidth;

        for (int x = 0; x < width; x += 2)
        {
            int posY = py * width + x;

            // Treat samples as unsigned; a signed char corrupts any value
            // above 127 and is one cause of the green images.
            int y  = _y[posY    ];
            int y2 = _y[posY + 1];
            int u  = _u[posUV] - 128;
            int v  = _v[posUV] - 128;

            float r  = y  + 1.370705f * v;
            float g  = y  - 0.698001f * v - 0.337633f * u;
            float b  = y  + 1.732446f * u;

            float r2 = y2 + 1.370705f * v;
            float g2 = y2 - 0.698001f * v - 0.337633f * u;
            float b2 = y2 + 1.732446f * u;

            rgb[pos    ] = static_cast<unsigned char>(std::clamp(r , 0.0f, 255.0f));
            rgb[pos + 1] = static_cast<unsigned char>(std::clamp(g , 0.0f, 255.0f));
            rgb[pos + 2] = static_cast<unsigned char>(std::clamp(b , 0.0f, 255.0f));

            rgb[pos + 3] = static_cast<unsigned char>(std::clamp(r2, 0.0f, 255.0f));
            rgb[pos + 4] = static_cast<unsigned char>(std::clamp(g2, 0.0f, 255.0f));
            rgb[pos + 5] = static_cast<unsigned char>(std::clamp(b2, 0.0f, 255.0f));

            pos   += 6;
            posUV += 1;
        }
    }
}
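For reference, here is a minimal usage sketch, assuming the three planes have already been copied into one tightly packed I420 buffer (which, as discussed further down in the thread, the raw mapBuffer() data apparently is not):

#include <vector>

int main()
{
    const int width = 720, height = 560;   // example stream resolution from this thread

    // Tightly packed I420 input (Y plane, then quarter-size U and V planes)
    // and packed RGB24 output; in a real program the input bytes would be
    // copied from the camera planes, here they are just zero-initialized.
    std::vector<unsigned char> i420(width * height * 3 / 2);
    std::vector<unsigned char> rgb(width * height * 3);

    yuv2rgb(i420.data(), rgb.data(), width, height);
    return 0;
}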

Does anyone perhaps have a specification of the YcbCr420_888 pixel format?

Is there any alternative way to collect camera image buffer data in real time?

I am looking for a contact for someone who knows the Tegra Argus library.
Perhaps one of the moderators knows how to reach them.

Hi,
just to be sure:

These two lines should give me an image in streamPixelFormat [only YUV420_888 is supported] and at streamResolution?

EGLStream::IImage *iImageData = Argus::interface_cast<EGLStream::IImage>(image);
iImageData->mapBuffer();
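
Expanded into a minimal sketch that also prints the per-plane buffer sizes, assuming EGLStream::IImage exposes getBufferCount() alongside the getBufferSize() and mapBuffer() calls used in this thread:

#include <cstdint>
#include <cstdio>
#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>

// Hypothetical helper: dump the size of every buffer (plane) of a frame's image.
// `image` is assumed to be the EGLStream::Image obtained from iFrame->getImage().
void dumpImageBuffers(EGLStream::Image *image)
{
    EGLStream::IImage *iImage = Argus::interface_cast<EGLStream::IImage>(image);
    if (!iImage)
        return;

    for (uint32_t i = 0; i < iImage->getBufferCount(); i++)
    {
        const void *data = iImage->mapBuffer(i);   // per-plane mapping
        (void)data;                                // not dereferenced in this sketch
        std::printf("%u-Buffer Size = %llu\n",
                    i, (unsigned long long)iImage->getBufferSize(i));
    }
}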

Now the buffer counter tells me that I have 3 buffers with these sizes:
e.g. resolution 720x560:
0-Buffer Size = 524288
1-Buffer Size = 131072
2-Buffer Size = 131072

e.g. resolution 640x480:
0-Buffer Size = 393216
1-Buffer Size = 131072
2-Buffer Size = 131072

Why does Buffer 0 [Y] have this size?
Why do Buffers 1-2 [U/V] still have the same size?

According to the YUV420 specification, e.g. for resolution 720x560:
0-Buffer should be 720*560 = 403200
1-Buffer should be 720*560/4 = 100800
2-Buffer should be 720*560/4 = 100800

Am I wrong?
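
As a worked check of that arithmetic, here is a minimal sketch computing the tightly packed I420 plane sizes; the larger values reported above suggest the planes are padded or stored in a different layout (see the ‘block linear’ note later in this thread):

#include <cstdio>

// Expected tightly packed I420 plane sizes for a given resolution.
// This is just the arithmetic from the post above; the sizes reported by
// getBufferSize() on the TX1 come out larger.
static void printPackedI420Sizes(int width, int height)
{
    int ySize  = width * height;
    int uvSize = ((width + 1) / 2) * ((height + 1) / 2);
    std::printf("%dx%d: Y = %d, U = %d, V = %d\n",
                width, height, ySize, uvSize, uvSize);
}

int main()
{
    printPackedI420Sizes(720, 560);   // Y = 403200, U = V = 100800
    printPackedI420Sizes(640, 480);   // Y = 307200, U = V = 76800
    return 0;
}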

Result from the OpenCV library [cvtColor(…)].

And now it works!
Thanks, guys, for the help and all the tips.
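
The final code is not shown in this thread, but here is a minimal sketch of the OpenCV route, assuming the three planes are first copied into one contiguous, tightly packed I420 buffer (the exact cv::COLOR_* constant depends on the actual plane layout):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Hypothetical helper: `i420` points to width*height*3/2 bytes of tightly
// packed I420 data (Y plane, then U, then V); the copy out of the Argus
// planes is not shown here.
cv::Mat i420ToBgr(const unsigned char *i420, int width, int height)
{
    // OpenCV takes planar YUV420 as a single-channel Mat with 1.5x the height.
    cv::Mat yuv(height * 3 / 2, width, CV_8UC1, const_cast<unsigned char *>(i420));
    cv::Mat bgr;
    cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420);
    return bgr;
}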

What was the resolution to this thread? It seems to be missing half of the conversation.

When I use a stream resolution of 692x520, I get only 2 buffers, and iimage->getBufferSize(0) returns 524288, which does not seem to align with this plane being a grayscale image at the stream resolution.

Hi dtok,

I ran into the same issue. How did you figure this out?

It seems that this rubbish output arises from the ‘block linear’ layout of the data returned by mapBuffer(), so it cannot be sent directly to glTexImage2D for texture sampling.
