Is the raw format data obtained through Libargus and MMAPI the original output from the camera sensor?

Hello, our team is developing a driver for a MIPI CSI-2 iToF camera on the Orin NX platform. Currently, both the device tree and the driver are in an initial working state: using a GStreamer pipeline with nvarguscamerasrc, we can see that the camera is outputting a data stream.

Since a ToF camera does not require Argus ISP processing the way an RGB camera does, our goal is to obtain the raw data from the camera and process it ourselves to generate depth frames. This camera uses a four-phase method to acquire depth information, meaning that it outputs four sub-frames within one frame period. We need to combine these four sub-frames to construct a single depth frame.
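For context, the reconstruction we intend to run on those four sub-frames is the standard four-phase continuous-wave calculation, sketched below. This is a generic sketch, not this specific sensor's pipeline: the phase ordering (0°/90°/180°/270° assumed here), the modulation frequency (`F_MOD` is an arbitrary 20 MHz placeholder), and any per-pixel calibration must come from the sensor's datasheet.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # hypothetical modulation frequency; the real value is sensor-specific

def depth_from_phases(q0, q1, q2, q3):
    """Generic 4-phase CW-ToF depth reconstruction.

    q0..q3 are the four sub-frames (float arrays) assumed to be sampled
    at 0/90/180/270 degrees. Phase ordering and calibration terms are
    assumptions; check them against the sensor manual.
    """
    phase = np.arctan2(q3 - q1, q0 - q2)      # wrapped phase in [-pi, pi]
    phase = np.mod(phase, 2.0 * np.pi)        # map to [0, 2*pi)
    return C * phase / (4.0 * np.pi * F_MOD)  # depth in metres
```

Any demosaicing or other alteration of the sub-frame values by the capture path would propagate directly into `phase`, which is why unmodified sensor output matters to us.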

To achieve this, we wrote a simple piece of code based on some examples from the libargus library (such as the oneShot example). This code can capture continuous frame data from the sensor. In theory, once we obtain this data, we should be able to proceed with the depth calculations.

Unfortunately, in a previous post I was informed that raw images acquired via Argus are not the sensor's actual original output. Even when setEnableIspStage(false) is used to skip the ISP post-processing stage, the raw data still undergoes demosaicing. This suggests the data may be altered, potentially compromising the accuracy of the depth frames.
https://forums.developer.nvidia.com/t/the-rawbayeroutput-sample-appears-to-have-captured-incorrect-raw-image-data/331306/7?u=star_sea

To investigate this further, we conducted a simple test. According to the camera sensor’s manual, there is a MIPI test mode in which each pixel outputs a value from a deterministic pseudo-random sequence. In 640x480 mode, the visualization of this pattern looks like this.

We configured the camera into this test mode and compared the raw data obtained through the following two methods:

  1. We wrote code to generate 640x480 data following the rules specified in the sensor manual, where each value is generated according to the documented pseudo-random pattern.
  2. We used an Argus example to capture the raw data output from the sensor.

If the Argus example applied a de-mosaicing operation to the raw data, the results from method (1) and method (2) should differ, right? However, after comparison, we found that the data obtained using both methods was identical.
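For reference, the comparison itself can be done with a short script like the sketch below. The assumptions are ours, not from the Argus sample: packed 16-bit little-endian pixels with no per-line padding (the RAW16 layout actually written by Argus should be verified before trusting the result), and the generation of the expected frame from the datasheet's pseudo-random rule is left out.

```python
import numpy as np

WIDTH, HEIGHT = 640, 480

def load_raw16(path, width=WIDTH, height=HEIGHT):
    """Load one frame of packed 16-bit little-endian raw data.

    Assumes no per-line padding; adjust if the captured buffer
    has a stride larger than width * 2 bytes.
    """
    data = np.fromfile(path, dtype="<u2", count=width * height)
    return data.reshape(height, width)

def count_mismatches(captured, expected):
    """Number of pixels where the captured frame differs from the
    frame generated from the sensor manual's pseudo-random rule."""
    return int(np.count_nonzero(
        captured.astype(np.int32) - expected.astype(np.int32)))
```

With `expected` built from the manual's rule, a mismatch count of 0 indicates the captured bytes match the documented test pattern exactly.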

Does this mean that the Argus example we are currently using is indeed capturing the sensor’s original output? And if so, what does the previously mentioned de-mosaicing operation refer to in this context?

Hello @Star_sea

Have you tried to dequeue buffers using v4l2-ctl, save them into a file, and check their content?

Regards!
Eduardo Salazar
Embedded SW Engineer at RidgeRun

Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com

I had considered this, but for some reason I can’t stream image data with v4l2-ctl or save raw images; commands like the ones below just hang with no output. I suspect something in my driver or device tree is still incomplete.

v4l2-ctl -d /dev/video2 --stream-count=1 --stream-mmap --stream-to=frame.raw --verbose
v4l2-ctl -d /dev/video2 --set-fmt-video=width=640,height=480 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=100

I’ve set the issue aside for now because nvgstcapture-1.0 and a GStreamer pipeline with nvarguscamerasrc display images correctly, so I decided to prioritize getting raw images through libargus. Should I resolve the v4l2-ctl problem first? If capturing data via Argus works, I’d prefer to keep moving forward along that path for the time being.

You need to set bypass_mode=0 after running any Argus app.
