We are building a Jetson Nano based system with a custom camera/frame grabber. Due to several requirements, a so-called single-shot mode is a must for our target scenarios.
In this mode the frame grabber keeps the camera sensor in sleep mode until an external trigger arrives. When this signal comes, the frame grabber reads out the sensor data and sends the image lines over CSI.
The CSI interface is initialized beforehand, and the number of frames is set via the v4l2 tool.
The phenomenon we are facing is the need to “request” (on the Jetson Nano side) 3 (or 4) frames more than we expect to receive.
For example, if we want to receive 1 frame (single-shot mode for the camera itself), we need to request 4 (or sometimes 5) frames in the Jetson Nano video input chain. Only the 1st frame is stored (or transferred to a sink); the other frames are abandoned somewhere. In this case the frame grabber must send those 4 frames (instead of 1), and frames #2–4 disappear (do they get stuck in the SoC DMA controller?).
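To illustrate, here is a minimal sketch of the kind of v4l2-ctl invocation we use on the Jetson Nano side (the device node, resolution, and pixel format are placeholders for illustration, not our exact setup):

```shell
# Hypothetical single-shot capture on the Jetson Nano side.
# Although we only want 1 frame, --stream-count has to be set
# to ~4-5 before the first frame actually reaches the sink.
v4l2-ctl -d /dev/video0 \
         --set-fmt-video=width=1920,height=1080,pixelformat=RG10 \
         --stream-mmap \
         --stream-count=4 \
         --stream-to=frame.raw
```

With --stream-count=1 the capture simply times out; only with the extra "dummy" frames does the first (target) frame come through.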
Does anybody know the exact requirements and limitations for such scenarios (single frames, or short series of frames)? Why do we need to pad the actual data with 3–4 dummy frames following the target ones?
Thanks in advance,