Libargus Latency

Dear Forum,

I am bringing up a stereo visual-inertial odometry system whose CSI camera sensors are externally triggered. However, I observed a latency of more than 30 ms from the moment the trigger signal is applied to the moment I can retrieve the frames. This contradicts my initial expectation, since Libargus seems to be a low-level API.

The cameras are global shutter (Sony IMX296), and the trigger pulse is very short (200 µs) for a short exposure. The trigger frequency is 20 Hz.

The list of formats:

	[0]: 'RG10' (10-bit Bayer RGRG/GBGB)
		Size: Discrete 1456x1088
			Interval: Discrete 0.017s (60.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.011s (90.000 fps)
		Size: Discrete 728x544
			Interval: Discrete 0.008s (121.000 fps)

Another observation is that reducing the resolution also reduces the latency:

  • 1456x1088: > 35 ms
  • 1280x720: 26-30 ms
  • 728x544: 10-15 ms

I did some research on the forum and found several results:

  • 48 ms to 60 ms: Latency Analysis with LibArgus
  • Avg. 16 ms measured with PerfTracker: Low-Latency CSI Camera Stream - #4 by ShaneCCC

And some worse results from glass-to-glass tests.

I would like to know whether such high latency is normal with the Libargus API.

Best Regards,
Khang

I suppose the frame rate, rather than the resolution, is the key factor for the latency: 1456x1088 runs at 60 fps (a 16.7 ms frame period) while 728x544 runs at 121 fps (8.3 ms), which matches the latency scaling you observed.
A glass-to-glass (G2G) latency of about 4-6 frames is normal.

Thanks

Hi @ShaneCCC,

Thanks for your reply. However, we do not stream the video out; we process the frames (fusing them with IMU and other data) on the Jetson itself. In terms of timestamps, we would like the frames to be available as close to the external trigger signal as possible. Is there any way to reduce the latency?

Best Regards,
Khang

If you have a critical latency requirement, I would suggest using the V4L2 API instead of Argus, because the Argus pipeline has many stages that add latency.
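
For reference, a minimal V4L2 capture loop for the RG10 mode might look like the sketch below. This is only an illustration: the device node /dev/video0, the buffer count, and the frame count are assumptions, and error handling is omitted.

    // Minimal V4L2 raw Bayer capture sketch (illustrative; error handling trimmed).
    // Assumes the sensor is on /dev/video0 and exposes the RG10 mode listed above.
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);

        // Request the 10-bit Bayer mode reported by the driver ('RG10').
        struct v4l2_format fmt = {0};
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 1456;
        fmt.fmt.pix.height = 1088;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB10;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        // Use a small queue to keep buffering latency low.
        struct v4l2_requestbuffers req = {0};
        req.count = 2;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        ioctl(fd, VIDIOC_REQBUFS, &req);

        void *bufs[2];
        for (unsigned i = 0; i < req.count; i++) {
            struct v4l2_buffer buf = {0};
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            buf.index = i;
            ioctl(fd, VIDIOC_QUERYBUF, &buf);
            bufs[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, buf.m.offset);
            ioctl(fd, VIDIOC_QBUF, &buf);
        }

        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &type);

        for (int n = 0; n < 100; n++) {
            struct v4l2_buffer buf = {0};
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            ioctl(fd, VIDIOC_DQBUF, &buf);   // blocks until a frame arrives

            // buf.timestamp is the kernel capture timestamp, useful for IMU
            // fusion; bufs[buf.index] holds the raw RG10 payload.
            printf("frame %u at %ld.%06ld\n", buf.sequence,
                   (long)buf.timestamp.tv_sec, (long)buf.timestamp.tv_usec);

            ioctl(fd, VIDIOC_QBUF, &buf);    // return the buffer to the driver
        }

        ioctl(fd, VIDIOC_STREAMOFF, &type);
        close(fd);
        return 0;
    }

Because this path bypasses the Argus capture pipeline, the buffer timestamp should sit much closer to the trigger edge than the Argus-delivered frame.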

Hi @ShaneCCC,

Should I start from one of the following examples: /usr/src/jetson_multimedia_api/samples/12_v4l2_camera_cuda/ or /usr/src/jetson_multimedia_api/samples/18_v4l2_camera_cuda_rgb/? Also, should I change the camera node in the device tree from

    csi_pixel_bit_depth = "10";
    mode_type = "bayer";
    pixel_phase = "rggb";

to

    mode_type = "grey";
    pixel_phase = "y";
    csi_pixel_bit_depth = "8";

in order to capture the raw data?

Best Regards,
Khang

I don't think so, if your sensor outputs a Bayer pattern.
You may need to implement software debayering, or use a YUV sensor instead of a Bayer sensor.

Hi @ShaneCCC,

I do not need YUV output, but RGB/RGBA and eventually grayscale, as the sensor is monochrome. Is it possible to do this with the GPU (CUDA) only and bypass the ISP (Libargus)?

If the format is raw Bayer, you can use CUDA for software demosaicing.
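
A minimal CUDA demosaic sketch is shown below: a naive 2x2 "superpixel" reduction (each RGGB quad becomes one RGBA pixel at half resolution), not a production-quality debayer. It assumes 10-bit samples stored in the low bits of 16-bit words; verify the packing your VI driver actually uses.

    #include <cuda_runtime.h>
    #include <stdint.h>

    __global__ void demosaic_rggb10(const uint16_t *raw, uchar4 *rgba,
                                    int width, int height, int pitch_px)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;  // output column
        int y = blockIdx.y * blockDim.y + threadIdx.y;  // output row
        if (x >= width / 2 || y >= height / 2) return;

        // Top-left corner of the 2x2 RGGB cell in the raw image.
        const uint16_t *cell = raw + (2 * y) * pitch_px + 2 * x;
        unsigned r  = cell[0];                 // R
        unsigned g1 = cell[1];                 // Gr
        unsigned g2 = cell[pitch_px];          // Gb
        unsigned b  = cell[pitch_px + 1];      // B

        // Average the two greens and scale 10-bit values down to 8 bits.
        rgba[y * (width / 2) + x] = make_uchar4(r >> 2,
                                                ((g1 + g2) >> 1) >> 2,
                                                b >> 2, 255);
    }

    // Host-side launch sketch: d_raw and d_rgba are device buffers.
    void launch_demosaic(const uint16_t *d_raw, uchar4 *d_rgba,
                         int width, int height)
    {
        dim3 block(16, 16);
        dim3 grid((width / 2 + block.x - 1) / block.x,
                  (height / 2 + block.y - 1) / block.y);
        demosaic_rggb10<<<grid, block>>>(d_raw, d_rgba, width, height, width);
        cudaDeviceSynchronize();
    }

A full-resolution bilinear or Malvar debayer would interpolate the missing channels per pixel instead; the 2x2 reduction is just the cheapest correct starting point.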

Hi @ShaneCCC,

I don't think so, if your sensor outputs a Bayer pattern.

Does this mean that I would still capture raw Bayer (RG10), rather than grayscale (Y10), via the V4L2 path?

Yes, and use software debayering for it.
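
Note that since the sensor here is monochrome, the "debayer" step can degenerate to a plain 10-bit to 8-bit conversion. A minimal CUDA sketch, assuming the same low-bit 16-bit packing as above:

    // For a monochrome sensor behind an RG10 node, every sample is plain
    // intensity, so "debayering" reduces to a 10-bit -> 8-bit copy.
    #include <cuda_runtime.h>
    #include <stdint.h>

    __global__ void raw10_to_gray8(const uint16_t *raw, uint8_t *gray, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            gray[i] = (uint8_t)(raw[i] >> 2);  // drop the 2 least-significant bits
    }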
