Please provide complete information as applicable to your setup.
- **Hardware Platform (Jetson / GPU):** AGX Xavier
- **DeepStream Version:** 5.0
- **JetPack Version (valid for Jetson only):** 4.4
I am trying to create a plugin (s1plugin) that consumes two synchronized camera inputs. Could anyone comment on whether the following pipeline makes sense?
If it does make sense, how do I guarantee that the video0 and video1 frames are in sync when they are consumed by s1plugin, given that the mux sits several elements earlier in the pipeline?
The nvstreammux only handles inference-related information; it does not change the video timestamps. What is your sync plugin for? To align the timelines of the two streams? Will it drop frames when the time gap between the two streams is too big? Do you want to sync the videos first, before inference?
The plugin computes stereo information from synchronized left and right cameras. We have two choices:
1. run the left/right cameras at 60 fps in async mode and rely on software sync (in this case we can drop frames to align the left and right streams, as long as we meet the 30 fps real-time requirement), or
2. run the left/right cameras at 30 fps in sync mode (in this case we cannot drop frames).
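For case 1, the software sync could work roughly like this: pair a left and a right frame when their presentation timestamps differ by at most half a frame period, and otherwise drop the older of the two unmatched frames. The sketch below is purely illustrative (plain Python, not the DeepStream/GStreamer API; all names are hypothetical), just to make the drop policy concrete:

```python
# Hypothetical software-sync sketch for case 1 (60 fps async cameras):
# pair left/right frames whose PTS differ by at most half a frame period,
# dropping the older frame when the gap is too large. This is NOT the
# DeepStream API -- only an illustration of the matching/drop policy.

from collections import deque

FRAME_PERIOD_NS = 1_000_000_000 // 60   # 60 fps capture period
TOLERANCE_NS = FRAME_PERIOD_NS // 2     # max allowed left/right PTS gap

def pair_frames(left_pts, right_pts, tol=TOLERANCE_NS):
    """Greedily match two PTS-ordered streams; drop the older unmatched frame."""
    left, right = deque(left_pts), deque(right_pts)
    pairs, dropped = [], []
    while left and right:
        gap = left[0] - right[0]
        if abs(gap) <= tol:
            # Close enough in time: emit a stereo pair.
            pairs.append((left.popleft(), right.popleft()))
        elif gap > 0:
            # Right frame is too old relative to left: drop it.
            dropped.append(("R", right.popleft()))
        else:
            # Left frame is too old relative to right: drop it.
            dropped.append(("L", left.popleft()))
    return pairs, dropped
```

For example, if the right camera misses one frame, the corresponding left frame gets dropped and the streams realign on the next pair. Dropping at most one frame per glitch keeps the paired output at or above the 30 fps requirement mentioned above.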
So what would be the appropriate pipeline for case 1 and case 2, respectively?