Jetson Nano running at low frame rate with CSI sensor

I have an application where I want to run CSI sensors at a low frame rate. When using GStreamer and nvarguscamerasrc, I can set the frame rate for the imx219 sensor down to 2 fps. An application that directly opens the device and uses ioctl with VIDIOC_DQBUF to pull buffers works at 5 fps or higher, but fails to dequeue buffers when the frame rate is below 5 fps. I get the same behavior whether the device is opened in blocking or non-blocking mode.
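
For reference, my capture loop reduces to roughly the sketch below (error handling trimmed; the 2-second select() timeout, the MMAP memory type, and the helper name are just what my test code uses). The select()/VIDIOC_DQBUF pair returns frames at 5 fps and above; below that, nothing comes back:

    // Sketch of the dequeue loop; "fd" is an already-configured,
    // streaming V4L2 capture device.
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/select.h>
    #include <linux/videodev2.h>

    static int dequeue_one(int fd)
    {
        fd_set fds;
        struct timeval tv = {2, 0};          // 2 s timeout (placeholder)
        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        if (select(fd + 1, &fds, NULL, NULL, &tv) <= 0)
            return -1;                       // timeout or error

        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
            return -1;                       // this is what fails below 5 fps

        // ... consume the frame that buf.index refers to ...

        return ioctl(fd, VIDIOC_QBUF, &buf); // hand the buffer back to the driver
    }

(The working GStreamer side is just a pipeline along the lines of gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), framerate=2/1, format=NV12' ! ..., with the caps as placeholders.)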

I can run argus_camera without problems with the frame rate set to 2 fps.
Could you suggest an MMAPI reference or sample for implementing this?

 argus_camera --framerate=2 --kpi
Executing Argus Sample Application (argus_camera)
Argus Version: 0.98.3 (multi-process)
PerfTracker: app initial 595 ms
PerfTracker 1: app intialized to task start 570 ms
PerfTracker 1: task start to issue capture 48 ms
PerfTracker 1: first request 748 ms
PerfTracker 1: total launch time 1963 ms
PerfTracker 1: frameRate 2.01 frames per second at 0 Seconds
PerfTracker: display frame rate 2.05 frames per second
PerfTracker 1: framedrop current request 0, total 0
PerfTracker 1: latency 92 ms average, min 90 max 94
PerfTracker: display frame rate 2.00 frames per second
PerfTracker: display frame rate 2.00 frames per second
PerfTracker 1: flush takes 1698 ms
PerfTracker: display frame rate 2.00 frames per second

The interface that argus_camera uses is significantly different, so it would be somewhat difficult to change the existing application.
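
From the MMAPI headers, the frame-rate part of what argus_camera does appears to come down to an ISourceSettings::setFrameDurationRange() call. A minimal sketch, assuming the standard libargus setup and omitting output-stream creation and error checks (so this is not a complete capture program):

    #include <Argus/Argus.h>
    #include <vector>

    int main()
    {
        using namespace Argus;

        UniqueObj<CameraProvider> provider(CameraProvider::create());
        ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

        std::vector<CameraDevice*> devices;
        iProvider->getCameraDevices(&devices);

        UniqueObj<CaptureSession> session(
            iProvider->createCaptureSession(devices[0]));
        ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

        UniqueObj<Request> request(iSession->createRequest());
        IRequest *iRequest = interface_cast<IRequest>(request);

        // A 500,000,000 ns frame duration corresponds to 2 fps.
        ISourceSettings *iSource =
            interface_cast<ISourceSettings>(iRequest->getSourceSettings());
        iSource->setFrameDurationRange(Range<uint64_t>(500000000ULL));

        iSession->repeat(request.get()); // a real program attaches a stream first
        return 0;
    }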

The ioctl/VIDIOC_DQBUF method is used in the “12_camera_v4l2_cuda” and “v4l2cuda” samples, so I assume this method is still somewhat recommended. These samples exhibit the same problem when the frame rate is below 5 fps. Any suggestions on the buffering issue with this interface?
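
For completeness, my setup before the capture loop follows the same pattern as those samples; a sketch (mmap()ing of the buffers and error checks omitted, and the 4-buffer ring size is just an example):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int start_streaming(int fd)
    {
        struct v4l2_requestbuffers req;
        memset(&req, 0, sizeof(req));
        req.count  = 4;                      // example ring size
        req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
            return -1;

        for (unsigned i = 0; i < req.count; i++) {
            struct v4l2_buffer buf;
            memset(&buf, 0, sizeof(buf));
            buf.index  = i;
            buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;
            if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) // pre-queue every buffer
                return -1;
        }

        enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        return ioctl(fd, VIDIOC_STREAMON, &type); // start capture
    }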

My application requires simultaneous capture on three cameras in 12-bit Bayer format. The required processing prevents me from using the ISP directly.
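
In V4L2 terms the format request is along these lines (a sketch; the SRGGB12 fourcc and the resolution are placeholders for whatever Bayer order and mode the actual sensors expose):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int set_raw12_format(int fd, unsigned w, unsigned h)
    {
        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width       = w;
        fmt.fmt.pix.height      = h;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB12; // 12-bit Bayer, RGGB order
        fmt.fmt.pix.field       = V4L2_FIELD_NONE;
        return ioctl(fd, VIDIOC_S_FMT, &fmt);           // driver may adjust values
    }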

If I need to rewrite the front end of the acquisition to use this different method, I have a few questions:
1. Does this API attempt to use the ISP for each camera (which would be a problem for three cameras)?
2. Are there any problems with getting the raw 12-bit data?
3. Does this API have the same 3-image ring buffer issue mentioned in the thread “The timestamp of vi5 capture dqueue is always two frames later than capture enqueue”?