I am using V4L2 to capture frames into DMA buffers created with NvBufferCreateEx() and mapped with NvBufferMemMap(), then rendering them through OpenGL ES by creating an EGLImageKHR with NvEGLImageFromFd(). I have gotten this to work in some cases, though not in an ideal way. The main reason I am using the multimedia API here is the low latency that the DMA buffers provide.
My issues all seem to come down to pixel format. The multimedia API does not support my camera’s pixel format (greyscale), but I was able to work around that by modifying the kernel to put the incoming MIPI sensor data into a 32-bit buffer. I then created the DMA buffer as ARGB32 so that NvEGLImageFromFd() did not modify the sensor data, allowing me to extract the real data in the GLSL fragment shader.
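For context, the extraction step in my fragment shader is roughly like this (sampler and varying names here are illustrative, not my exact code; the texture is the one bound from the EGLImage):

```glsl
#version 300 es
precision mediump float;

// Illustrative names; the actual texture is bound from the EGLImage
// produced by NvEGLImageFromFd() via glEGLImageTargetTexture2DOES().
uniform sampler2D u_frame;   // ARGB32 buffer carrying the raw sensor bytes
in vec2 v_texcoord;
out vec4 fragColor;

void main()
{
    // The greyscale sample was packed into one channel of the 32-bit
    // pixel, so read that channel back and replicate it across RGB.
    float grey = texture(u_frame, v_texcoord).r;
    fragColor = vec4(grey, grey, grey, 1.0);
}
```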
Now I am using a USB camera. This camera outputs in a few different YUV formats and resolutions. I have tested the camera in GStreamer and OpenCV and it renders correctly, but when I try it through the multimedia API the YUV-to-RGB conversion comes out wrong. It does work if I set the driver resolution to 1280x1024. I am a bit baffled here and have more or less ruled out my code as the cause.
- Is there a way to go from DMA buffer to GLSL shader with the raw data, skipping the automatic conversion?
- Are there any known bugs with the conversion from NV12/YV12/YU12 to RGBA?