Jetson Xavier NX /
Jetpack 5.1.2 /
DeepStream 6.2 /
based on gst-nvdsvideotemplate
Hi,
I want to implement time-synchronisation logic by dropping or duplicating (padding) frames.
I can already analyse the sync difference at the frame level (let's say the timestamps are unstable in the input files),
but I'm not sure how to drop or re-use the previous frame in a specific stream (sink_%).
For example, in the above scenario, my custom plug-in can detect the time-sync difference [0, 3, 2] using frames from the first few seconds,
and now I want to either drop 0, 3, 2 frames from each sink_%, or re-use (duplicate) 3, 0, 1 frames on each sink_%.
To do so,
I guess I need to signal nvstreammux so that it somehow drops 3 frames on sink_1 and 2 frames on sink_2,
or, in my sync_adjust plugin, keep a queue whose size equals the sync difference.
But both ideas seem complicated and not straightforward.
Could you give me some advice on how to drop or re-use the frames?
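The offset arithmetic behind the two options above can be sketched in a few lines. This is an illustrative model only (the function name and shape are my own, not a DeepStream API): given the measured per-stream lead offsets, dropping aligns every stream to the latest-starting one, while padding (duplicating the first frame) aligns every stream to the earliest-starting one.

```python
def alignment_plan(offsets):
    """Hypothetical helper: for measured per-stream lead offsets (in frames),
    return (frames_to_drop, frames_to_pad) per stream.

    - drop: discard each stream's lead frames -> align to the latest start
    - pad:  duplicate each late stream's first frame -> align to the earliest start
    """
    latest = max(offsets)
    drop = list(offsets)                  # e.g. drop 0/3/2 lead frames
    pad = [latest - o for o in offsets]   # e.g. duplicate first frame 3/0/1 times
    return drop, pad

drop, pad = alignment_plan([0, 3, 2])
print(drop)  # [0, 3, 2]
print(pad)   # [3, 0, 1]
```

This reproduces the [0, 3, 2] / [3, 0, 1] counts from the example above; which option to take depends on whether losing the leading frames is acceptable.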
No. nvstreammux implements its own synchronization algorithm; you can't override the internal algorithm from outside, and doing so conflicts with the purpose of nvstreammux.
What is your purpose in "dropping" or "copying" frames? DeepStream works with batches, not frames. How would you handle batches with the dropped or copied frames?
general explanation
Let's say I have three video inputs, as shown in my diagram. They have visually overlapping regions (and moving objects), so I can analyse the time difference between the videos (this is already implemented in my custom plug-in, sync_adjust).
I hope the image is visible in my post above. For example, sink_1 has three extra frames at the beginning, and sink_2 has two more than sink_0.
Why drop or re-use frames?
The goal is to produce time-synchronised output videos from input videos with different starting times, where the timestamps are not perfectly reliable (for example, each video was captured with a different NTP source).
Your operation will generate new frames, or omit some frames, compared to the original videos, while nvstreammux is applied to the original videos to generate batches. Please apply your synchronization algorithm before using nvstreammux.
Since we don't know what kind of synchronization you will do, or what the purpose is, we don't have any further ideas.
Is your purpose to convert the videos to the same framerate? If so, videorate (gstreamer.freedesktop.org) may help you.
If you have your own special synchronization logic, you can develop your own GStreamer plugin with multiple sink pads and multiple src pads to accept multiple streams and output multiple new streams.
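Before committing to a full multi-pad element, the per-pad behaviour such a plugin would need can be modelled without GStreamer at all. The sketch below is a toy model (class and pad names are hypothetical, not a real plugin): each sink pad carries a drop counter, and the chain logic discards buffers until the counter is exhausted, then passes everything through.

```python
class StreamAligner:
    """Toy model of the per-pad logic a custom multi-sink/multi-src
    GStreamer element could apply: drop the first N buffers on each
    named pad, then forward every later buffer unchanged."""

    def __init__(self, drop_counts):
        # pad name -> number of lead frames still to drop
        self.remaining = dict(drop_counts)

    def push(self, pad, frame):
        """Return the frame to forward downstream, or None if dropped."""
        if self.remaining.get(pad, 0) > 0:
            self.remaining[pad] -= 1
            return None
        return frame

aligner = StreamAligner({"sink_1": 3, "sink_2": 2})
out = [aligner.push("sink_1", f) for f in range(5)]
print(out)  # first 3 frames on sink_1 dropped -> [None, None, None, 3, 4]
```

In a real element, returning None would correspond to not pushing the buffer to the src pad (or returning GST_PAD_PROBE_DROP from a pad probe), and the counters would come from the measured sync differences.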
It is not a videorate-style conversion.
When there are multiple videos that captured the same scene but start at different times, or have some frame drops in the middle,
I want to dynamically make these videos time-synced.
Anyway, a plugin with multiple sink/src pads would help with this frame-level manipulation.
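The mid-stream case (a frame dropped somewhere inside a video) can also be sketched abstractly. The model below is illustrative only, not DeepStream code: for each output tick, emit the frame whose timestamp matches; if a stream has no frame for that tick, re-use its previous frame, which is the "duplicating" option applied dynamically.

```python
class DynamicAligner:
    """Toy sketch of dynamic mid-stream alignment for one stream:
    for each expected output tick, emit the frame stamped with that
    tick; if the frame was dropped upstream, re-use the last frame."""

    def __init__(self):
        self.last = None  # most recently seen frame

    def align(self, frames_by_ts, ticks):
        out = []
        for t in ticks:
            if t in frames_by_ts:
                self.last = frames_by_ts[t]
            out.append(self.last)  # duplicate previous frame on a gap
        return out

a = DynamicAligner()
print(a.align({0: "f0", 1: "f1", 3: "f3"}, [0, 1, 2, 3]))
# frame for tick 2 was dropped upstream -> ['f0', 'f1', 'f1', 'f3']
```

Running one such aligner per stream, with ticks derived from a common clock, would keep the outputs frame-aligned even when individual inputs lose frames mid-stream.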