I am trying to extract either the G channel from an ABGR image or the Y channel from a YUV image in LibArgus. My code uses a producer/consumer thread model, following some of the examples, and in the original version of my ConsumerThread::threadExecute() method I was using:
m_dmabuf = iNativeBuffer->createNvBuffer(iStream->getResolution(), NvBufferColorFormat_ABGR32, NvBufferLayout_Pitch);
to extract the ABGR image data, later wrapping this in an OpenCV Mat and converting it to grayscale with OpenCV's cvtColor.
Profiling shows this colour conversion takes about 50 ms (!), which is not going to cut it if I want a low-latency, real-time frame conversion process.
I tried replacing the code in the consumer thread to extract a YUV image using
iNativeBuffer->createNvBuffer(iStream->getResolution(), NvBufferColorFormat_NV12, NvBufferLayout_Pitch);
and the same with NvBufferColorFormat_NV21, hoping to use the 0th plane as a reasonable grayscale proxy, but the result is a completely black image.
Am I barking up the wrong tree here? I need a fast way to extract the grey(ish) component of each frame from the sensors I am using. I think the G channel might work, but I can't find an NvBufferColorFormat that returns RGB as three separate planes, and OpenCV is too slow for the extraction (as is a brute-force copy-one-byte, skip-three approach).
Any help would be most appreciated.