Get raw frame from MIPI/CSI camera via Argus ICaptureSession

Dear developer community!

I’m trying to access a Raspberry Pi Camera v2 connected via MIPI/CSI to a Jetson Nano as performantly as possible in C++. We need direct access to the individual pixel values because we want to process them selectively. To do this, we decided not to use OpenCV or GStreamer but Argus instead. My first question: would you agree that this is the most performant, yet still generalizable, way to do this on the Jetson platform?

I had a look at the sample found in /usr/src/jetson_multimedia_api/samples/09_camera_jpeg_capture/main.cpp and made some modifications to it. Creating the CameraProvider, getting the CameraDevice, reading out the ICameraProperties, creating a CaptureSession, configuring the IEGLOutputStreamSettings, and creating and enabling the OutputStream all already work. However, I’m currently stuck trying to read a frame into a buffer. Here is my sample code:

if (iCaptureSession->waitForIdle() != STATUS_OK)
    printf("Failed to wait for output stream to become idle");
size_t bytesRead = iCaptureSession->readFrame(buffer, bufferSize, 1000 * 1000);
if (bytesRead != bufferSize)
    printf("Failed to read full image data from output stream buffer");
std::ofstream outputFile("output.bin", std::ios::binary);
outputFile.write((char*)buffer, bufferSize);

It seems that readFrame is not available, although I saw it in some documentation. So my second question: what am I doing wrong here? Could you point me in the right direction, please?

Thanks a lot in advance!

hello c.uran,

there’s no way to capture raw frames via Argus. Please use the standard v4l2-ctl controls to dump a raw frame directly.
for example,
$ v4l2-ctl -d /dev/video0 --set-fmt-video=width=2592,height=1944,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=1 --stream-to=test.raw
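Once test.raw is on disk, the individual pixel values the original question was after can be unpacked in C++ along these lines. This is only a sketch: it assumes RG10 stores one pixel per little-endian 16-bit word with the valid data in the low 10 bits — verify the exact layout on your device, since the Jetson capture path may left-justify the bits within the word, in which case the mask below needs a preceding shift.

```cpp
// Sketch: load a raw RG10 Bayer dump (e.g. from the v4l2-ctl command above)
// and extract individual pixel values.
// Assumption: one pixel per little-endian 16-bit word, valid bits in the
// low 10 bits. Adjust the mask/shift if your device left-justifies them.
#include <cstdint>
#include <fstream>
#include <vector>

std::vector<uint16_t> loadRaw10(const char* path, size_t width, size_t height)
{
    std::vector<uint16_t> pixels(width * height);
    std::ifstream in(path, std::ios::binary);
    in.read(reinterpret_cast<char*>(pixels.data()),
            static_cast<std::streamsize>(pixels.size() * sizeof(uint16_t)));
    for (uint16_t& p : pixels)
        p &= 0x03FF;  // keep the 10 valid bits per pixel
    return pixels;
}
```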

Hello JerryChang,

Thank you for your reply. So, just so I understand: Argus is only capable of returning JPEG-encoded images, as can be seen in the sample /usr/src/jetson_multimedia_api/samples/09_camera_jpeg_capture/main.cpp?

If that’s the case, what would be the best way to access the raw image data in C++ with v4l2? And do you agree that this is the most performant way to access the image data compared to OpenCV or GStreamer?

Thank you again for your help!
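For context, the direction we had in mind is a plain V4L2 mmap capture in C++, mirroring the v4l2-ctl command above — roughly like this sketch. The device path and resolution are assumptions for our setup, error handling is reduced to early returns, and the Tegra-specific bypass_mode control (set via --set-ctrl above) is omitted here; it would need a VIDIOC_S_EXT_CTRLS call on a real Nano.

```cpp
// Sketch of a minimal single-frame V4L2 mmap capture, mirroring
// `v4l2-ctl --set-fmt-video=... --stream-mmap --stream-count=1`.
// Not hardware-tested; error handling is abbreviated for brevity.
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

// RG10 stores one pixel per 16-bit word, so a frame is width*height*2 bytes.
size_t rg10BufferSize(uint32_t width, uint32_t height)
{
    return static_cast<size_t>(width) * height * 2;
}

bool captureOneRawFrame(const char* devPath, uint32_t width, uint32_t height,
                        const char* outPath)
{
    int fd = open(devPath, O_RDWR);
    if (fd < 0) return false;

    // Request the RG10 Bayer format (V4L2_PIX_FMT_SRGGB10 is fourcc 'RG10').
    v4l2_format fmt = {};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB10;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { close(fd); return false; }

    // One memory-mapped buffer is enough for a single-shot capture.
    v4l2_requestbuffers req = {};
    req.count = 1;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { close(fd); return false; }

    v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { close(fd); return false; }

    void* mem = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, buf.m.offset);
    if (mem == MAP_FAILED) { close(fd); return false; }

    // Queue the buffer, start streaming, and block until one frame is filled.
    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) { close(fd); return false; }
    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) { close(fd); return false; }
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { close(fd); return false; }

    FILE* out = fopen(outPath, "wb");
    if (out) {
        fwrite(mem, 1, buf.bytesused, out);
        fclose(out);
    }

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return out != nullptr;
}
```

Would something along these lines be the recommended approach?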

hello c.uran,

may I know the actual use case for capturing raw frames?
For example, why do you need direct access to the individual pixels for processing?


Yes, of course. Our Smart City use case aims to achieve network-efficient selective streaming and data analysis in a 5G campus network with a hybrid edge and cloud infrastructure. This means that our centralized registry decides which pixels (or other kinds of data) should be transmitted from which producers (mostly Jetson Nanos) to which consumers (e.g. Jetson Xaviers, Orins, or servers). It is the consumer’s job to analyze the received data and derive decisions from it. The registry also instructs the producers whether or not they should do any pre-processing of the data (e.g. compression, aggregation, or reduction).
I hope this clarifies our use case and you can recommend the best way to move forward.

Thank you,

hello c.uran,

you may refer to the Argus sample Argus/public/samples/cudaBayerDemosaic.
It uses a CUDA Bayer consumer and connects it to a RAW16 output stream for processing.
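Relative to 09_camera_jpeg_capture, the key change in that sample is requesting a RAW16 Bayer stream instead of a YUV/JPEG one. In pseudocode following the sample's pattern (check the Argus headers in your JetPack release for the exact signatures), it looks roughly like this:

```
// Pseudocode sketch: request a RAW16 Bayer EGL stream instead of YUV/JPEG.
UniqueObj<OutputStreamSettings> settings(
    iCaptureSession->createOutputStreamSettings(STREAM_TYPE_EGL));
IEGLOutputStreamSettings* iSettings =
    interface_cast<IEGLOutputStreamSettings>(settings);
iSettings->setPixelFormat(PIXEL_FMT_RAW16);        // Bayer, 16-bit container
iSettings->setResolution(Size2D<uint32_t>(2592, 1944));
UniqueObj<OutputStream> stream(
    iCaptureSession->createOutputStream(settings.get()));
// A CUDA consumer (as in cudaBayerDemosaic) then reads the Bayer frames
// from this stream via the EGLStream/CUDA interop.
```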