Get raw frame from MIPI/CSI camera via Argus ICaptureSession

Dear developer community!

I’m trying to access a Raspberry Pi Camera v2 connected via MIPI/CSI to a Jetson Nano as performantly as possible in C++. We need direct access to the individual pixel values because we want to process them selectively. For that reason, we decided to use Argus rather than OpenCV or GStreamer. My first question: would you agree that this is the most performant, yet still generalizable, way to do this on the Jetson platform?

I had a look at the sample in /usr/src/jetson_multimedia_api/samples/09_camera_jpeg_capture/main.cpp and modified it. Creating the CameraProvider, getting the CameraDevice, reading out the ICameraProperties, creating a CaptureSession, configuring the IEGLOutputStreamSettings, and creating and enabling the OutputStream all work. However, I’m currently stuck trying to read the frame into a buffer. Here is my sample code:

if (iCaptureSession->waitForIdle() != STATUS_OK)
    printf("Failed to wait for output stream to become idle");
size_t bytesRead = iCaptureSession->readFrame(buffer, bufferSize, 1000 * 1000);
if (bytesRead != bufferSize)
    printf("Failed to read full image data from output stream buffer");
std::ofstream outputFile("output.bin", std::ios::binary);
outputFile.write((char*)buffer, bufferSize);

It seems that readFrame is not available, although I saw it in some documentation. My second question: what am I doing wrong here? Could you point me in the right direction, please?

Thanks a lot in advance!

hello c.uran,

there’s no way to capture raw frames via Argus. Please use the standard v4l2-ctl controls to dump raw data directly.
for example,
$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=2592,height=1944,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=test.raw

Hello JerryChang,

Thank you for your reply. So, just so I understand: is Argus only capable of returning JPEG-encoded images, as seen in the sample /usr/src/jetson_multimedia_api/samples/09_camera_jpeg_capture/main.cpp?

If that’s the case, what would be the best way to access the raw image data in C++ with v4l2? And do you agree that this is the most performant way to access the image data compared to OpenCV or GStreamer?

Thank you again for your help!

hello c.uran,

may I know what’s the actual use case for getting raw frames?
for example, why do you need direct access to the individual pixels for processing?


Yes, of course. Our Smart City use case aims at network-efficient selective streaming and data analysis in a 5G campus network with a hybrid edge and cloud infrastructure. A centralized registry decides which pixels (or other kinds of data) should be transmitted from which producers (mostly Jetson Nanos) to which consumers (e.g. Jetson Xaviers, Orins, or servers). It is the consumer’s job to analyze the received data and derive decisions from it. The registry also instructs the producers whether or not to pre-process the data (e.g. compression, aggregation, or reduction).
I hope this clarifies our use case and you can recommend the best way to move forward.

Thank you,

hello c.uran,

you may refer to the Argus sample Argus/public/samples/cudaBayerDemosaic.
it uses a CUDA Bayer consumer and connects it to the RAW16 output stream for processing.

Hi c.uran,
I have a similar issue getting pixels from a camera using Argus. I wanted to avoid multithreading and came up with this solution (I removed some error checking and some initialization; I hope it’s clear anyway). I’m just grabbing still images whenever the main program needs one. Unfortunately it takes more than 0.3 s to capture a frame, so I’m looking for another solution, but maybe it’s helpful anyway.

    /*** Snippet that could be in a constructor, that receives and saves imageSize in a private field ***/
    dmabuf = -1;

    /* Get the camera provider */
    cameraProvider = UniqueObj<CameraProvider>(CameraProvider::create());
    ICameraProvider *iCameraProvider = interface_cast<ICameraProvider>(cameraProvider);
    Ext::IBlockingSessionCameraProvider *iBlockingSessionCameraProvider
        = interface_cast<Ext::IBlockingSessionCameraProvider>(cameraProvider);

    /* Get the camera devices (we'll use only the first one) */
    std::vector<CameraDevice*> cameraDevices;
    iCameraProvider->getCameraDevices(&cameraDevices);
    ICameraProperties *iCameraProperties = interface_cast<ICameraProperties>(cameraDevices[0]);

    /* Get the supported sensor modes */
    ISensorMode *iSensorMode;
    std::vector<SensorMode*> sensorModes;
    iCameraProperties->getAllSensorModes(&sensorModes);

    /* Choose the 4th sensor mode (1640x1232) */
    SensorMode *sensorMode = sensorModes[3];
    iSensorMode = interface_cast<ISensorMode>(sensorMode);

    /* Create a blocking capture session */
    captureSession = UniqueObj<CaptureSession>(
        iBlockingSessionCameraProvider->createBlockingCaptureSession(cameraDevices[0]));
    iCaptureSession = interface_cast<ICaptureSession>(captureSession);

    /* Create the settings for the output stream */
    streamSettings = UniqueObj<OutputStreamSettings>(
        iCaptureSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iEglStreamSettings =
        interface_cast<IEGLOutputStreamSettings>(streamSettings);
    iEglStreamSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iEglStreamSettings->setResolution(Size2D<uint32_t>(1664, 1232));

    /* Create the output stream */
    captureStream = UniqueObj<OutputStream>(iCaptureSession->createOutputStream(streamSettings.get()));

    /* Create capture request */
    request = UniqueObj<Request>(iCaptureSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);

    /* Set the sensor mode for the source */
    ISourceSettings *iSourceSettings = interface_cast<ISourceSettings>(iRequest->getSourceSettings());
    iSourceSettings->setSensorMode(sensorMode);

    /* Enable the output stream */
    iRequest->enableOutputStream(captureStream.get());

    /* Create the frame consumer */
    frameConsumer = UniqueObj<FrameConsumer>(FrameConsumer::create(captureStream.get()));
    iFrameConsumer = interface_cast<IFrameConsumer>(frameConsumer);


    /*** Part of code that could be repeated, as part of a "capture" method ***/
    Argus::Status argusStatus;

    /* Capture a frame from the camera */
    iCaptureSession->capture(request.get());
    /* Acquire the frame */
    UniqueObj<Frame> frame(iFrameConsumer->acquireFrame(3000000000, &argusStatus));

    if (argusStatus != Argus::STATUS_OK)
        printf("Argus error status: %d", (int)argusStatus);
    IFrame *iFrame = interface_cast<IFrame>(frame);
    Image *image = iFrame->getImage();
    IImage2D *image2D = interface_cast<IImage2D>(image);
    /* Get the IImageNativeBuffer extension interface. */
    NV::IImageNativeBuffer *iNativeBuffer = interface_cast<NV::IImageNativeBuffer>(image);

    /* Create the native buffer on first use; afterwards copy each new frame into it */
    if (dmabuf == -1)
        dmabuf = iNativeBuffer->createNvBuffer(imageSize,
            NvBufferColorFormat_ARGB32, NvBufferLayout_Pitch);
    else
        iNativeBuffer->copyToNvBuffer(dmabuf);

    /* Get the content of the image */
    uint8_t *d = NULL;
    NvBufferMemMap(dmabuf, 0, NvBufferMem_Read, (void**)&d);
    NvBufferMemSyncForCpu(dmabuf, 0, (void**)&d);

    memcpy(buffer, d, imageSize.height() * imageSize.width() *4);

    NvBufferMemUnMap(dmabuf, 0, (void**)&d);


    /*** This part should be in the destructor ***/
    if (dmabuf != -1)
        NvBufferDestroy(dmabuf);
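One caveat about the memcpy in the capture method: with NvBufferLayout_Pitch the driver may pad each row, so the mapped buffer’s pitch can be larger than width * 4 (the actual pitch is reported by NvBufferGetParams). A minimal, Jetson-independent sketch of the safer row-by-row copy:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a pitched image (each source row occupies `pitch` bytes, of which
// only `rowBytes` carry pixel data) into a tightly packed destination
// buffer of rowBytes * height bytes.
void copyPitched(uint8_t *dst, const uint8_t *src,
                 size_t rowBytes, size_t pitch, size_t height)
{
    for (size_t y = 0; y < height; ++y)
        std::memcpy(dst + y * rowBytes, src + y * pitch, rowBytes);
}
```

For the ARGB32 buffer above, rowBytes would be imageSize.width() * 4 and pitch the value obtained from the buffer parameters.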


Let me just add that by using
    iCaptureSession->repeat(request.get());
rather than
    iCaptureSession->capture(request.get());
in the code above, every capture cycle now takes less than 25 ms!

As mentioned in the developer guide, Argus::ICaptureSession::repeat is a convenience method that queues a request whenever the request queue is empty and the camera is ready to accept new requests.
