Hello there,
I am working on an application that uses the Argus library to extract image data from the Raspberry Pi Camera Module v2 (for the moment).
As the capture pixel format I use YUV420:
streamSettings = UniqueObj<OutputStreamSettings>(iCaptureSession->createOutputStreamSettings(STREAM_TYPE_EGL));
iStreamSettings = interface_cast<IEGLOutputStreamSettings>(streamSettings);
if (iStreamSettings) {
iStreamSettings->setPixelFormat(Argus::PIXEL_FMT_YCbCr_420_888); // See Types.h line 274 in /usr/src/nvidia/tegra_multimedia_api/include/Argus/
iStreamSettings->setResolution(iSensorMode->getResolution());
iStreamSettings->setMode(EGL_STREAM_MODE_FIFO);
iStreamSettings->setMetadataEnable(true);
}
I use a consumer thread model like in the samples.
And, as in the samples, grabbing the frame with ImageNativeBuffer works too:
fd = iNativeBuffer->createNvBuffer(iSensorMode->getResolution(), NvBufferColorFormat_YUV420, NvBufferLayout_Pitch, NV::ROTATION_0);
unsigned char* pdata = NULL;
NvBufferMemMap(fd, 0, NvBufferMem_Read, (void**) &pdata);
NvBufferMemSyncForCpu(fd, 0, (void**) &pdata);
NvBufferParams params;
NvBufferGetParams(fd, &params);
After this, I print the information I need from params:
nv_buffer ptr = 0x7f98001f10
Buffer Size = 1008
Pixel Format = 0
Num Planes = 3
Width[0] = 1920
Height[0] = 1080
Pitch[0] = 2048
Offset[0] = 0
PSize[0] = 2228224
Layout[0] = 0
Width[1] = 960
Height[1] = 540
Pitch[1] = 1024
Offset[1] = 2228224
PSize[1] = 655360
Layout[1] = 0
Width[2] = 960
Height[2] = 540
Pitch[2] = 1024
Offset[2] = 2883584
PSize[2] = 655360
Layout[2] = 0
So, my assumption now is:
The three planes are Y, U and V. So width[0] is the width of my Y channel, the pitch takes the padding after each row into account, and there is an empty region after each plane's data until the next plane starts at params.offset[i].
This also seems to fit with the sizes of the U and V channels.
I need to make the data contiguous (without padding and without empty areas between the channels), so I start by copying all the image data from Y, which works well:
pImg = new unsigned char[params.width[0] * params.height[0]
+ params.width[1] * params.height[1]
+ params.width[2] * params.height[2]];
for (unsigned int i = 0; i < params.height[0]; i++) {
for (unsigned int j = 0; j < params.width[0]; j++) {
pImg[count++] = pdata[i * params.pitch[0] + j];
}
}
For the U and V channels, though, I get segfaults at the addresses given by params:
for (unsigned int i = 0; i < params.height[1]; i++) {
for (unsigned int j = 0; j < params.width[1]; j++) {
pImg[count++] = pdata[params.offset[1] + i * params.pitch[1] + j]; // this crashes when reading pdata, not when writing pImg; I tested this
}
}
This is the U channel; the same holds true for the V channel.
As with all segfaults, sometimes it gets past the U channel and fails at the V channel, sometimes not.
I checked it multiple times: according to the information from params,
the segfaults occur in memory regions that are still within the first half of the channel data.
And before someone asks:
Yes, I really do need YUV, because it is required by a framework I am working with, which is essential to my work.
The advantage is obvious: I can either use a gray image with just the Y channel, or extend it to color with the U and V channels.
My question now is:
What did I do wrong? How do I access the U and V channels correctly?