I am using a Leopard Imaging setup with hardware-synchronized cameras, and I want to obtain the synchronized images in software for processing. I have been using Argus and noticed that neither the 13_multi_camera sample nor the argus_camera multi-sensor example is perfectly synchronized: for example, device #0 and device #5 are consistently off by one or two frames. However, running two separate argus_camera instances on different devices gives decent synchronization.
Within my own code, putting multiple camera devices into a single capture session produces frames that are not synchronized, even though this is what has been suggested in other forum posts (https://devtalk.nvidia.com/default/topic/1039183/argus-syncing-multiple-capture-sessions/). Creating two separate capture sessions, one per device, gives much better synchronization. Why is this the case? From other discussions, putting multiple devices into one capture session seems like the logical way to ensure software synchronization, yet in practice it makes synchronization worse. What is the intended use case for multiple devices in one capture session? Is there any documentation on what Argus does internally that would explain this behavior?