Argus API Help

I think we are producing and consuming frames correctly. However, we don't want to use the standard PreviewCaptureThread or the other sample streams for viewing a frame. We need to bring the raw frame buffers into our application, shuttle some of them to TRT, and shuttle the rest out to our v4l2loopback device.

We have gotten as far as obtaining the IFrame object via the iFrameConsumer->acquireFrame() call, but we don't quite understand the procedure for getting the raw frame into a usable format (preferably I420) that we can manipulate and utilize.

Can someone provide a code snippet that takes that frame and gets the raw buffer image that we can write out to v4l2loopback in NV12 or, better yet, I420 format?

Here is our first attempt. It ran, but the output was neither formatted correctly nor produced quickly.

#include <Argus/Argus.h>
#include <EGLGlobal.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>
#include <linux/videodev2.h>
#include <sys/mman.h>
#include "nvbuf_utils.h"  // NvBufferParams, NvBufferGetParams

const Argus::Size2D<uint32_t> Default_Frame_Resolution {1280, 720};

Argus::IStream* const iStream = Argus::interface_cast<Argus::IStream>(m_output_stream);
EGLStream::IFrameConsumer* const iFrameConsumer = Argus::interface_cast<EGLStream::IFrameConsumer>(m_consumer);

while (!isProgramInterrupted())
{
    // wait for a frame capture to arrive
    Argus::Status status {Argus::STATUS_OK};
    Argus::UniqueObj<EGLStream::Frame> frame(
        iFrameConsumer->acquireFrame(Argus::TIMEOUT_INFINITE, &status));
    EGLStream::IFrame* const iFrame = Argus::interface_cast<EGLStream::IFrame>(frame);
    EGLStream::NV::IImageNativeBuffer* const iNativeBuffer =
        Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(iFrame->getImage());

    // >>> from here on is where we are not certain ... <<<
    const int fd = iNativeBuffer->createNvBuffer(Default_Frame_Resolution,
                                                 NvBufferColorFormat_YUV420,
                                                 NvBufferLayout_Pitch);  // or NvBufferLayout_BlockLinear

    NvBufferParams params;
    NvBufferGetParams(fd, &params);

    // size of the Y plane; note params.pitch[0] may be wider than the visible width
    const int fsize = params.pitch[0] * Default_Frame_Resolution.height();

    char* buffer = (char*) mmap(NULL, 1.5 * fsize, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, params.offset[0]);
    m_v4l2.write(buffer, fsize);
}



Do you want to capture raw (Bayer) data with Argus? That is not supported yet.


How about just raw NV12 or I420? How is it that we can retrieve frames via GStreamer's appsink but not via your API?


I think you do in fact share the raw bytes in a specific format. This block of code prints out the raw bytes for each buffer in the image.

// Print out image details, and map the buffers to read out some data.
        Image *image = iFrame->getImage();
        IImage *iImage = interface_cast<IImage>(image);
        IImage2D *iImage2D = interface_cast<IImage2D>(image);
        for (uint32_t i = 0; i < iImage->getBufferCount(); i++)
        {
            const uint8_t *d = static_cast<const uint8_t*>(iImage->mapBuffer(i));
            if (!d)
                ORIGINATE_ERROR("\tFailed to map buffer\n");

            Size2D<uint32_t> size = iImage2D->getSize(i);
            CONSUMER_PRINT("\tIImage(2D): "
                           "buffer %u (%ux%u, %u stride), "
                           "%02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x %02x\n",
                           i, size.width(), size.height(), iImage2D->getStride(i),
                           d[0], d[1], d[2], d[3], d[4], d[5],
                           d[6], d[7], d[8], d[9], d[10], d[11]);
        }

I understand that the only supported pixel format right now is Argus::PIXEL_FMT_YCbCr_420_888. What exact byte format is that? I need to understand how to interpret the individual buffers returned from mapBuffer. The first buffer (0) always has the width, height, and stride we expect. The second buffer is always smaller.

How could I go about combining all of those raw bytes, or processing them manually, from PIXEL_FMT_YCbCr_420_888 into another format?

This sample code shows how to get YUV420 (PIXEL_FMT_YCbCr_420_888) data, not Bayer raw data.

The wiki below has the detailed information.

Yes, I understand that. What byte format does that represent? Is that actually NV12? Can anyone at NVIDIA provide an example of how to convert YCbCr_420_888 to I420 and/or RGB?

The sample code you reference is already configured to output I420 (PIXEL_FMT_YCbCr_420_888).