Hello,
I am using a Leopard Imaging setup with hardware-synchronized cameras, and I want to obtain the synchronized images in software for processing. I have been using Argus, and noticed that neither the 13_multi_camera sample nor the argus_camera multi-sensor example is perfectly synchronized. For example, device #0 and device #5 are always off by one or two frames. However, running two separate argus_camera instances with different devices gives decent synchronization.
Within my own code, putting multiple camera devices into a single capture session results in frames that are not synchronized, even though this is what has been suggested in other forum posts (Argus - Syncing multiple capture sessions - Jetson TX2 - NVIDIA Developer Forums). Creating two separate capture sessions, one for each device, results in much better synchronization. Why is this the case? From other discussions, putting multiple devices into one capture session seems like the logical way to ensure software synchronization, but it actually makes things worse. What is the use case for multiple devices in one capture session? Is there any information about what happens inside Argus that would cause this behavior?
Thank you.
Did you try the …/tegra_multimedia_api/argus/samples/syncSensor sample, and did you get worse results with it?
That sample is intended for this kind of use case.
That sample only reports the KL divergence between the images; it does not actually verify the synchronization. I have tried similar code, in which multiple camera devices are put into the same CaptureSession. However, with the cameras in the same CaptureSession, I can see that the synchronization is not correct, while it is much better with separate capture sessions. I am confused why a method that is specifically designed for hardware-synchronized cameras returns images that are even more out of sync.
hello edexheim,
The syncSensor application uses a single software capture request to trigger the shutter events of multiple camera sensors; that is why it usually produces closely aligned capture frames.
Since you have enabled the hardware frame-sync pin, you can also compare the sensor timestamps in user space to achieve synchronization.
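The timestamp comparison mentioned above can be sketched as follows. This is a minimal, hardware-independent sketch: it assumes the per-frame sensor timestamps have already been read out of the capture metadata (e.g. via Argus ICaptureMetadata::getSensorTimestamp(), which reports nanoseconds); the function name and tolerance are illustrative, not part of any NVIDIA API.

```cpp
#include <cstdint>
#include <cstdlib>
#include <utility>
#include <vector>

// Pair each frame from camera A with the closest-in-time frame from
// camera B, accepting a pair only when the timestamps differ by no more
// than toleranceNs (e.g. half a frame period). In a real application the
// timestamps would come from the Argus capture metadata; here they are
// plain nanosecond values so the matching logic can be shown on its own.
std::vector<std::pair<std::size_t, std::size_t>>
matchByTimestamp(const std::vector<std::int64_t>& tsA,
                 const std::vector<std::int64_t>& tsB,
                 std::int64_t toleranceNs)
{
    std::vector<std::pair<std::size_t, std::size_t>> matches;
    std::size_t j = 0;
    for (std::size_t i = 0; i < tsA.size(); ++i) {
        // Advance j while the next frame of B is at least as close to A[i].
        while (j + 1 < tsB.size() &&
               std::llabs(tsB[j + 1] - tsA[i]) <= std::llabs(tsB[j] - tsA[i]))
            ++j;
        if (!tsB.empty() && std::llabs(tsB[j] - tsA[i]) <= toleranceNs)
            matches.emplace_back(i, j);
    }
    return matches;
}
```

With hardware-synchronized sensors the matched pairs should differ by far less than a frame period; a persistent one-to-two-frame offset like the one described in this thread would show up here as a constant timestamp gap.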
By the way, may I ask what your use case for synchronized captures is?
Please also check these similar discussion threads for reference: Topic 1061467 and Topic 1007933.
Thanks
Hello JerryChang,
We are using the cameras for stereo mapping, so the cameras need to be synchronized. Since we are planning on using 6 cameras, separate capture sessions should be sufficient as long as they do not incur a significant performance hit.
Thank you.