I’m adapting the yuvjpeg sample code to process video frames with OpenCV, but I’m stuck converting the frames to BGR. The frames arrive in PIXEL_FMT_YCbCr_420_888 format, and each frame appears to carry two buffers. The output of yuvjpeg follows:
CONSUMER: Acquired Frame: 20, time 24585281973
CONSUMER: Sensor Timestamp: 24585244997000, LUX: 8.522511
CONSUMER: IImage(2D): buffer 0 (640x480, 640 stride), 07 09 0a 08 07 08 09 0d 09 05 07 0d
CONSUMER: IImage(2D): buffer 1 (320x240, 640 stride), 7b 88 87 89 7a 84 7c 86 7c 8a 7c 80
CONSUMER: Wrote JPEG: 20.JPG
The code that generates this output is:
// Print out image details, and map the buffers to read out some data.
Image *image = iFrame->getImage();
IImage *iImage = interface_cast<IImage>(image);
IImage2D *iImage2D = interface_cast<IImage2D>(image);
for (uint32_t i = 0; i < iImage->getBufferCount(); i++)
{
    const uint8_t *d = static_cast<const uint8_t*>(iImage->mapBuffer(i));
    if (!d)
        ORIGINATE_ERROR("\tFailed to map buffer\n");

    Size2D<uint32_t> size = iImage2D->getSize(i);
    CONSUMER_PRINT("\tIImage(2D): "
                   "buffer %u (%ux%u, %u stride), "
                   "%02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x\n",
                   i, size.width(), size.height(), iImage2D->getStride(i),
                   d[0], d[1], d[2], d[3], d[4], d[5],
                   d[6], d[7], d[8], d[9], d[10], d[11]);
}
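From the log, buffer 0 looks like the Y plane (640x480, stride 640) and buffer 1 looks like interleaved Cb/Cr (320x240 pairs, and a 640-byte stride matches 320 pairs of 2 bytes, i.e. NV12-style semi-planar). Based on that guess, this is the kind of conversion I’m attempting. It is only a sketch: it assumes NV12 ordering (Cb first) and BT.601 full-range coefficients, and the actual range/order on the sensor pipeline may differ:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Convert one YCbCr pixel to BGR. Assumes BT.601 full-range
// coefficients; a limited-range source would need different math.
static void ycbcrToBgr(uint8_t y, uint8_t cb, uint8_t cr, uint8_t *bgr)
{
    auto clamp = [](int v) { return (uint8_t)std::min(255, std::max(0, v)); };
    int c = (int)y, d = (int)cb - 128, e = (int)cr - 128;
    bgr[0] = clamp(c + (int)(1.772 * d));                    // B
    bgr[1] = clamp(c - (int)(0.344136 * d + 0.714136 * e));  // G
    bgr[2] = clamp(c + (int)(1.402 * e));                    // R
}

// Convert the two mapped buffers to a tight w*h*3 BGR image.
// yPlane:    buffer 0, w x h luma samples, row pitch yStride bytes.
// cbcrPlane: buffer 1, (w/2) x (h/2) interleaved Cb/Cr pairs,
//            row pitch cStride bytes (640 in the log above).
std::vector<uint8_t> nv12ToBgr(const uint8_t *yPlane, size_t yStride,
                               const uint8_t *cbcrPlane, size_t cStride,
                               int w, int h)
{
    std::vector<uint8_t> bgr(w * h * 3);
    for (int r = 0; r < h; r++)
        for (int c = 0; c < w; c++)
        {
            uint8_t y = yPlane[r * yStride + c];
            // Each 2x2 luma block shares one Cb/Cr pair.
            const uint8_t *uv = cbcrPlane + (r / 2) * cStride + (c / 2) * 2;
            ycbcrToBgr(y, uv[0], uv[1], &bgr[(r * w + c) * 3]);
        }
    return bgr;
}
```

Alternatively, if the layout really is NV12, I believe one could copy both buffers row by row (dropping the stride padding) into one contiguous `(h*3/2) x w` single-channel `cv::Mat` and call `cv::cvtColor(mat, bgr, cv::COLOR_YUV2BGR_NV12)`; if the colors come out swapped, the chroma order is probably Cr first and `cv::COLOR_YUV2BGR_NV21` would apply instead.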
Thanks for any ideas.