How is image data laid out in memory for a mapped buffer?


I’m trying to access the pixels of an image taken through the Argus camera API. The example “yuvJpeg” is the only one I could find that directly accesses the pixels of the image. It does this by calling the “mapBuffer()” function.

I can do this for my application, but the image data appears to be in some interleaved order. Furthermore, the documentation for mapBuffer() contains the following comment:

* How this data is laid out in memory may be described by another Frame interface

Where can I find documentation on how the mapped buffer is laid out in memory? Is mapBuffer() the best way to access individual pixel values, or is there a better way to do this?


Are you on r24.2.1 or r28.1? Do you want to access via CPU or CUDA?




Please refer to

Thanks. Is there another way to do it, or can you post the specification for how the memory is laid out? I’m having trouble including the NvBuffer libraries with my program. I have the Argus library working, but I’m not able to see the rest of the multimedia API.

Please upgrade to r28.1 and use the newly added APIs in ~/tegra_multimedia_api/include/nvbuf_utils.h

iImage->mapBuffer() does not give a pitch-linear buffer for CPU access.
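For context, a pitch-linear NV12 buffer (the layout the NvBuffer path typically exposes for CPU access) stores a full-resolution Y plane followed by a half-resolution interleaved UV plane, with each row padded out to the pitch. Below is a minimal sketch of the addressing arithmetic only, assuming a single shared pitch for both planes and the UV plane starting immediately after the Y plane; on real hardware the per-plane pitches and offsets should be queried (e.g. via NvBufferGetParams in nvbuf_utils.h) rather than assumed:

```cpp
#include <cassert>
#include <cstddef>

// Byte offsets of the luma and chroma samples for pixel (x, y) in a
// pitch-linear NV12 buffer. Illustrative assumptions: both planes share
// one pitch, and the UV plane follows the Y plane with no extra gap.
struct Nv12Layout {
    std::size_t pitch;   // bytes per row, >= width (rows are padded)
    std::size_t height;  // number of luma rows

    std::size_t yOffset(std::size_t x, std::size_t y) const {
        return y * pitch + x;
    }
    // The UV plane is 2x2 subsampled: one interleaved (U, V) pair
    // covers a 2x2 block of luma pixels.
    std::size_t uOffset(std::size_t x, std::size_t y) const {
        return height * pitch + (y / 2) * pitch + (x / 2) * 2;
    }
    std::size_t vOffset(std::size_t x, std::size_t y) const {
        return uOffset(x, y) + 1;
    }
};
```

This also shows why a non-pitch-linear (block-linear) buffer is hard to walk from the CPU: the simple row arithmetic above only holds for the pitch-linear layout.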

Thanks. I was able to get this to work with IImageNativeBuffer and nvbuf_utils. This is a bit too slow, though, and I’m trying to speed it up.

To restate my goal: I want to take the data in the EGLStream image, convert it to the UYVY format, and save the data to another arbitrary location.

Ideally this use case should require the data to be copied only once. Runtime is important to me and I need to eliminate any superfluous operations. The current method copies the data twice, which is too slow.

In the current method, IImageNativeBuffer copies the data once (when it creates the NvBuffer), and then I need to copy the data a second time to store it in my final destination.

Is there a way to do this that cuts out the intermediate step of copying the image to an NvBuffer?

In other words, is there a way to set this up such that some Nvidia function copies/converts the data directly from the EGLStream object to an arbitrary memory location, without an extra copy operation in the middle?
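For reference, the NV12 → UYVY conversion itself is just a repacking: UYVY stores one U, Y0, V, Y1 quadruple per horizontal pixel pair, with the 4:2:0 chroma row reused for two luma rows. Here is a CPU-side sketch of that repacking, assuming pitch-linear NV12 input with a shared pitch for both planes (the function name and parameters are illustrative, not an NVIDIA API; the HW converter discussed below performs the equivalent repacking without CPU involvement):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Repack pitch-linear NV12 (Y plane + interleaved UV plane) into packed
// UYVY (U Y0 V Y1 per pixel pair). Width must be even. Chroma is
// upsampled 4:2:0 -> 4:2:2 by repeating each UV row for two luma rows.
std::vector<uint8_t> nv12ToUyvy(const uint8_t* y, const uint8_t* uv,
                                std::size_t width, std::size_t height,
                                std::size_t pitch) {
    std::vector<uint8_t> out(width * height * 2);  // 2 bytes per pixel
    for (std::size_t row = 0; row < height; ++row) {
        const uint8_t* yRow = y + row * pitch;
        const uint8_t* uvRow = uv + (row / 2) * pitch;  // shared by 2 rows
        uint8_t* dst = out.data() + row * width * 2;
        for (std::size_t x = 0; x < width; x += 2) {
            dst[2 * x + 0] = uvRow[x + 0];  // U
            dst[2 * x + 1] = yRow[x + 0];   // Y0
            dst[2 * x + 2] = uvRow[x + 1];  // V
            dst[2 * x + 3] = yRow[x + 1];   // Y1
        }
    }
    return out;
}
```

Because this is a pure repacking, whichever engine performs it (CPU or the hardware converter) can in principle write straight to the final destination, which is exactly the single-copy behavior asked about above.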

Hi EEG, you can use the HW converter to do the NV12 -> UYVY conversion. Please refer to ~/tegra_multimedia_api/samples/07_video_convert