Upload 3 planes from Argus camera to GL

I have the following pipeline set up:
An Argus camera that writes frames into a video converter (to convert to RGB), which uploads its output with NvEGLImageFromFd.
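For reference, the upload step boils down to wrapping the converter's output dmabuf fd in an EGLImage, roughly like this (a sketch with error handling trimmed; `conv_out_fd` is just an illustrative name for the converter's capture-plane fd):

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include "nvbuf_utils.h"  // MMAPI helper that provides NvEGLImageFromFd

// Sketch: wrap the converter's output dmabuf fd in an EGLImage for GL.
EGLImageKHR uploadConverterOutput(EGLDisplay display, int conv_out_fd)
{
    EGLImageKHR image = NvEGLImageFromFd(display, conv_out_fd);
    if (image == EGL_NO_IMAGE_KHR) {
        // handle the error
    }
    // Later, when done with the frame: NvDestroyEGLImage(display, image);
    return image;
}
```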

To save myself the conversion, I've converted my GL shader to take 3 planes (Y, U, and V) as inputs instead of a single RGB plane.
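The reworked fragment shader just samples the three planes as separate textures and keeps the downstream math in YUV space; a trimmed sketch (sampler names are illustrative and the actual rendering math is elided):

```cpp
// Sketch of the 3-plane fragment shader source (GLSL ES) as a C++ string.
static const char *kYuvFragSrc = R"(
#version 300 es
precision mediump float;
uniform sampler2D texY;  // full-resolution luma plane
uniform sampler2D texU;  // chroma U plane (half resolution for 4:2:0)
uniform sampler2D texV;  // chroma V plane (half resolution for 4:2:0)
in vec2 uv;
out vec4 fragColor;
void main() {
    vec3 yuv = vec3(texture(texY, uv).r,
                    texture(texU, uv).r,
                    texture(texV, uv).r);
    // ...the rest of the rendering math operates on yuv directly...
    fragColor = vec4(yuv, 1.0);
}
)";
```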

I’ve noticed that the YUV input to the converter is represented as only one DMABUF file descriptor (with planes 1 and 2 left empty). This seems contrary to the V4L2 API.

How can I get 3 DMABUFs to represent the 3 planes coming from the Argus camera? Are all 3 planes represented in the file descriptor returned from iNativeBuffer->createNvBuffer? If so, what are their dimensions/offsets?

Thank you very much.

Hi wdouglass,

Sorry for the late reply.

The Argus camera can only output one dmabuf with 2 planes in it.
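You can confirm the layout of that buffer yourself; a rough sketch (assuming an MMAPI release that provides NvBufferGetParams in nvbuf_utils.h, error handling trimmed) that prints the per-plane geometry:

```cpp
#include <cstdio>
#include "nvbuf_utils.h"

// Sketch: query plane count, size, pitch and offset of the dmabuf
// returned by iNativeBuffer->createNvBuffer.
void dumpPlaneInfo(int dmabuf_fd)
{
    NvBufferParams params;
    if (NvBufferGetParams(dmabuf_fd, &params) != 0) {
        printf("NvBufferGetParams failed\n");
        return;
    }
    printf("num_planes = %u\n", params.num_planes);
    for (unsigned i = 0; i < params.num_planes; i++) {
        printf("plane %u: %ux%u pitch=%u offset=%u\n",
               i, params.width[i], params.height[i],
               params.pitch[i], params.offset[i]);
    }
}
```

For the default NV12-style output you would typically see two planes: a full-resolution Y plane and a half-resolution interleaved UV plane.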

Is it necessary for you to use a GL shader as the converter? We have a CUDA converter for YUV-to-RGB in the MMAPI samples.

The thread below shows how to access each plane through an EGLImage.

https://devtalk.nvidia.com/default/topic/1027288/jetson-tx1/libargus-crashing-with-cuda-opengl-interop/post/5228961/#5228961
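In short, that thread registers the EGLImage with CUDA and reads the per-plane pointers out of the CUeglFrame; roughly like this (a sketch that assumes a current CUDA context, with error checks omitted):

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda.h>
#include <cudaEGL.h>

// Sketch: register an EGLImage with CUDA and walk its planes (pitch-linear case).
void accessPlanes(EGLImageKHR image)
{
    CUgraphicsResource resource = NULL;
    CUeglFrame frame;

    cuGraphicsEGLRegisterImage(&resource, image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    cuGraphicsResourceGetMappedEglFrame(&frame, resource, 0, 0);

    // For a pitch-linear frame, frame.frame.pPitch[i] points to plane i
    // (e.g. Y in plane 0 and interleaved UV in plane 1 for NV12).
    for (unsigned i = 0; i < frame.planeCount; i++) {
        void *plane_ptr = frame.frame.pPitch[i];
        (void)plane_ptr;  // launch your kernel on this plane here
    }

    cuGraphicsUnregisterResource(resource);
}
```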

And this one has info about how to bind a GL texture to an EGLImage.
https://devtalk.nvidia.com/default/topic/1028811/jetson-tx2/export-gl-texture-as-dmabuf/post/5233512/#5233512
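The binding itself comes down to attaching the EGLImage to a GL_TEXTURE_EXTERNAL_OES texture with glEGLImageTargetTexture2DOES; a rough sketch:

```cpp
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

// Sketch: attach an EGLImage to an external GL texture.
GLuint bindEglImageToTexture(EGLImageKHR image)
{
    // The OES entry point has to be fetched at runtime.
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC glEGLImageTargetTexture2DOES =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress(
            "glEGLImageTargetTexture2DOES");

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)image);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}
```

Note that a texture bound this way is sampled with samplerExternalOES in the shader.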

When I wrote the original post, I was experimenting with YUV inputs to a rather convoluted rendering pipeline that's implemented as a series of shaders. I thought I might save myself a bit of latency by doing all of my math in YUV space rather than converting to RGB, doing my rendering, and then converting back to YUV for my output.

It turns out that the latency of the VIC conversion hardware is perfectly acceptable, and I've got a working implementation in RGB space.

Thanks anyway for your reply!