We are developing a multiple-camera system based on the Nvidia Jetson Xavier NX. So far we can guarantee capture synchronization through hardware mechanisms:
Shared MCLK,
Slave sensor(s) triggered by the master sensor (XHS and XVS in the case of Sony IMX sensors).
However, we experienced on a platform other than Nvidia Jetson that each sensor is controlled by a separate instance of the ISP, so the sensors do not share the same gain, exposure, and white-balance settings. This may be optimal for a multi-channel AI/deep-learning camera, but for a stereo camera it can lead to wrong calculations based on the stereo pair, since the two images potentially have different settings. In my opinion, the slave sensors' registers should be programmed with the values applied to the master.
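To illustrate the idea, here is a minimal, hypothetical userspace sketch that mirrors the master sensor's V4L2 gain and exposure controls onto the slave (the device nodes are assumptions, and this only covers sensor-side controls; white balance is typically applied in the ISP rather than in the sensor):

```cpp
// Hypothetical sketch: read the master sensor's gain and exposure controls
// over V4L2 and apply them to the slave, so the stereo pair captures with
// identical settings. The /dev/video* node assignment is an assumption.
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <cstdio>

static bool mirrorControl(int masterFd, int slaveFd, __u32 id)
{
    v4l2_control ctrl = {};
    ctrl.id = id;
    if (ioctl(masterFd, VIDIOC_G_CTRL, &ctrl) < 0)    // read current value from master
        return false;
    return ioctl(slaveFd, VIDIOC_S_CTRL, &ctrl) >= 0; // program the same value on slave
}

int main()
{
    int masterFd = open("/dev/video0", O_RDWR); // assumed master node
    int slaveFd  = open("/dev/video1", O_RDWR); // assumed slave node
    if (masterFd < 0 || slaveFd < 0) { perror("open"); return 1; }

    // Mirror the sensor-side controls that must match on a stereo pair.
    mirrorControl(masterFd, slaveFd, V4L2_CID_GAIN);
    mirrorControl(masterFd, slaveFd, V4L2_CID_EXPOSURE);

    close(masterFd);
    close(slaveFd);
    return 0;
}
```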
Therefore, I would like to know whether there is any solution on the Nvidia Jetson platforms to address this issue, please?
May I know the Jetpack release version you're working with? And how many camera streams does your camera solution have?
You may install MMAPI ($ sudo apt install nvidia-l4t-jetson-multimedia-api) to get the Argus samples. Two of them, syncSensor and syncStereo, demonstrate the use case of multiple sources per single capture session. These two sample apps create a single capture session with dual cameras and take the first camera's settings as the master, applying them to both streams.
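For reference, here is a condensed sketch of that pattern (not the full sample; the resolution and pixel format are placeholders, and error handling plus the EGLStream consumer setup are omitted for brevity):

```cpp
// Condensed sketch of the syncSensor approach: both CameraDevices go into
// ONE CaptureSession, and a single Request drives both output streams, so
// one AE/AWB loop controls both sensors.
#include <Argus/Argus.h>
#include <cstdint>
#include <cstdio>
#include <vector>

using namespace Argus;

int main()
{
    UniqueObj<CameraProvider> provider(CameraProvider::create());
    ICameraProvider *iProvider = interface_cast<ICameraProvider>(provider);

    std::vector<CameraDevice*> devices;
    iProvider->getCameraDevices(&devices);
    if (devices.size() < 2) { printf("Need at least two cameras\n"); return 1; }

    // One capture session owning BOTH sensors of the stereo pair.
    UniqueObj<CaptureSession> session(iProvider->createCaptureSession(devices));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

    UniqueObj<OutputStreamSettings> settings(
        iSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iEglSettings =
        interface_cast<IEGLOutputStreamSettings>(settings);
    iEglSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iEglSettings->setResolution(Size2D<uint32_t>(1920, 1080)); // placeholder mode

    // One output stream per sensor, selected via setCameraDevice().
    IOutputStreamSettings *iSettings = interface_cast<IOutputStreamSettings>(settings);
    iSettings->setCameraDevice(devices[0]);
    UniqueObj<OutputStream> streamLeft(iSession->createOutputStream(settings.get()));
    iSettings->setCameraDevice(devices[1]);
    UniqueObj<OutputStream> streamRight(iSession->createOutputStream(settings.get()));

    // A single request enables both streams; the first camera acts as the
    // master whose settings apply to the whole session.
    UniqueObj<Request> request(iSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    iRequest->enableOutputStream(streamLeft.get());
    iRequest->enableOutputStream(streamRight.get());

    iSession->repeat(request.get());
    // ... attach EGLStream consumers here to pull the synchronized frames ...
    iSession->stopRepeat();
    iSession->waitForIdle();
    return 0;
}
```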
However, there is a limitation on multiple sources in a single capture session: you will see some abnormal issues when initializing more than 3 cameras in a single capture session.
For Jetson Nano, we would like to use the most recent Jetpack release that still supports it: Jetpack-4.6.3 (L4T-R32.7.3). Two synchronized cameras are acceptable; more synchronized cameras would be better.
For Jetson Xavier NX, we would like to use Jetpack-5.1 (L4T-R35.2.1), with which our custom camera driver is aligned. From 2 up to 6 synchronized cameras are expected, depending on the application use case.
Thank you, I have also seen the syncSensor and syncStereo examples in other discussions. I will take a look at them.
I saw from the discussions below that synchronization with the syncSensor method (i.e. within a single capture session) was only tested and confirmed working with a single pair of left/right cameras. So there is some confusion about the maximum number of cameras in a single capture session. Does it vary according to the platform (Jetson Nano, Xavier NX, Xavier AGX, Orin, …) and/or the Jetpack release? Could you confirm?
Ya… the topics you've quoted reported abnormal issues when increasing the number of cameras.
To clarify, I have also learned of the maximum number of cameras in a single capture session from those topics.
Internally, we have only a single stereo pair (i.e. left/right cameras to be in sync) for verification; we have not tested more than 2 cameras for synchronized capture at the moment.
Does that mean you have only tested the sync (both hardware, and software via syncSensor in a single capture session) with a pair of sensors, that being the only hardware available to you?
And can you confirm that 3 cameras in a single capture session work fine, but 4 or more cameras in a single capture session do not?