Inference Branch – performs three separate inference tasks (as shown at the bottom of the diagram).
The Problem:
I’m currently facing an issue where I cannot reliably extract the timestamp for the same frame (global timestamp) across the H265 and Inference branches.
In the Inference branch, I can easily access the timestamp via the ntp_timestamp field of NvDsFrameMeta, which is very convenient. I can also retrieve the frame_num for precise frame identification.
However, in the H265 and H264 branches, the frames do not pass through an inference element, so I do not have access to the NvDsFrameMeta metadata that contains ntp_timestamp or frame_num. All I have at that stage is the GstBuffer, which only provides pts and dts. These are not UNIX timestamps; they are times in nanoseconds relative to the start of the pipeline, and they do not map directly to frame numbers or absolute real-time timestamps.
My Question:
How can I extract accurate timing information in the non-inference branches (e.g., H265)?
Is there a DeepStream plugin or element that could help propagate or attach NvDsFrameMeta (or similar) to these branches so that timestamps are accessible globally for the same frame? Would adding an nvstreammux (or a similar metadata-propagating element) in those branches allow me to retrieve the same metadata as in the inference branch?
PS
I cannot save the H265 files after inference because I need them in 4K resolution.
Any guidance on how to solve this would be greatly appreciated. Thanks.
You can use the PTS (presentation timestamp) from the GstBuffer to synchronize frames across branches, even if you don’t have NvDsFrameMeta. While PTS is relative to the pipeline start, it remains consistent across branches, allowing you to match frames.
This gives you the frame timestamp in nanoseconds. You can use it as a proxy to align frames between the H265 and inference branches.
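For illustration, here is a minimal pad-probe sketch in Python (PyGObject) that prints the PTS in nanoseconds; the element name h265_enc and the pad chosen are placeholders, not taken from your pipeline:

```python
# Minimal sketch: print the PTS (nanoseconds) of every buffer passing a pad.
# "h265_enc" is a placeholder element name, not from the original pipeline.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def pts_probe(pad, info, u_data):
    buf = info.get_buffer()
    if buf is not None and buf.pts != Gst.CLOCK_TIME_NONE:
        print(f"PTS: {buf.pts} ns ({buf.pts / Gst.SECOND:.3f} s since pipeline start)")
    return Gst.PadProbeReturn.OK

# enc = pipeline.get_by_name("h265_enc")
# enc.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, pts_probe, 0)
```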
We are also working on a GStreamer element for synchronization, but it is in its early stages and there is no release available yet. It functions as a multiqueue with buffer synchronization based on timestamps. The wiki page is bare-bones at the moment, but you can check it out later if you are interested: GStreamer Buffer Synchronization
2. interpipesink is a subclass of appsink. Generally speaking, you can use the PTS (GST_BUFFER_PTS(buffer)) as a reference clock. However, interpipesink is not provided by DeepStream; for further questions, please consult RidgeRun.
This seems to be a separate problem, but I think it is caused by the pipeline design. Please share your goals so we can determine whether the pipeline can be optimized.
My goal is to maintain a consistent global timestamp across all branches of the pipeline for each frame, particularly across the inference and H.265 recording branches. I want to ensure that frame N carries the same timestamp (UNIX epoch time in nanoseconds) in all branches, regardless of processing time or delays.
Current Challenges:
Timestamp Consistency: Here is my interpipesink and interpipesrc configuration:
With this setup, the PTS gets reset per interpipesrc, which prevents me from propagating a custom global timestamp (e.g., from time.time_ns()). Using PTS directly isn't ideal either, as it represents time relative to the pipeline start, not the absolute/global time I need (a UNIX timestamp such as 1750747123). I previously tried generating a global timestamp just before loop.run(), but it was off by ~20 ms from the actual pipeline start.
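For reference, here is a sketch of the PTS-to-epoch mapping idea, capturing the offset from the first buffer at the source instead of before loop.run(). It only helps within a single pipeline where the PTS is preserved end to end, which interpipesrc breaks in my setup:

```python
# Sketch only: derive a UNIX-epoch timestamp from PTS within one pipeline.
# Assumes the PTS is NOT rewritten between the two probes (interpipesrc may
# reset it, in which case this mapping does not carry across pipelines).
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

epoch_offset_ns = None  # UNIX epoch time corresponding to PTS == 0

def source_probe(pad, info, u_data):
    global epoch_offset_ns
    buf = info.get_buffer()
    if buf is not None and epoch_offset_ns is None and buf.pts != Gst.CLOCK_TIME_NONE:
        epoch_offset_ns = time.time_ns() - buf.pts  # captured on the first buffer
    return Gst.PadProbeReturn.OK

def branch_probe(pad, info, u_data):
    buf = info.get_buffer()
    if buf is not None and epoch_offset_ns is not None and buf.pts != Gst.CLOCK_TIME_NONE:
        frame_epoch_ns = epoch_offset_ns + buf.pts
        print(f"frame epoch time: {frame_epoch_ns} ns")
    return Gst.PadProbeReturn.OK

# src_pad.add_probe(Gst.PadProbeType.BUFFER, source_probe, 0)
# h265_branch_pad.add_probe(Gst.PadProbeType.BUFFER, branch_probe, 0)
```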
Inference-Induced Delays: One of my inference branches takes ~100ms per frame (including inference + post-processing). This creates unintended delays in other branches, including H.265 recording. My expectation was that interpipesink/interpipesrc would isolate branches and allow asynchronous execution. However, I still observe delays in the H.265 branch during inference, leading to inconsistent frame intervals. For instance, a frame that should be saved every second ends up irregularly timed once the inference branch is active. Disabling the inference branch eliminates this issue.
Why Not tee: I considered using tee, but it doesn’t solve the issue. tee shares buffer references across branches. Hence, a delay in one branch (e.g., inference) propagates to all others, including H.265 encoding.
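For completeness, the standard tee-decoupling pattern I am aware of is a per-branch queue, made leaky on branches where dropping frames is acceptable (e.g., the inference branch) so it drops old buffers instead of back-pressuring the tee. A leaky queue is not acceptable for the recording branch, which must keep every frame. A sketch with illustrative values:

```python
# Sketch: per-branch queues after a tee. Property values are illustrative,
# not tuned for the real pipeline.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Inference branch: bounded and leaky, so a slow model drops old frames
# instead of back-pressuring the tee (and therefore the other branches).
infer_queue = Gst.ElementFactory.make("queue", "infer_queue")
infer_queue.set_property("max-size-buffers", 4)
infer_queue.set_property("max-size-bytes", 0)
infer_queue.set_property("max-size-time", 0)
infer_queue.set_property("leaky", 2)  # 2 = leaky downstream (drop oldest)

# Recording branch: must keep every frame, so it is never leaky.
record_queue = Gst.ElementFactory.make("queue", "h265_record_queue")
record_queue.set_property("max-size-buffers", 60)
```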
Summary:
What I need is a way to:
Assign and propagate a global, frame-specific timestamp (specifically UNIX time in nanoseconds for every frame) to every branch.
Ensure that the same frame (e.g., frame 100) carries the same timestamp across all branches: inference, H.265, and H.264.
Isolate branches so delays in the inference pipeline do not affect the others.
Is there a recommended DeepStream-compatible way to propagate a custom timestamp across multiple branches while avoiding shared buffer dependency and timing desynchronization? Any advice on structuring such a pipeline efficiently would be appreciated.
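For concreteness, one mechanism that might fit this (I have not verified whether interpipesink/interpipesrc, nvstreammux, or the encoders preserve it) is GStreamer's GstReferenceTimestampMeta, which can carry an absolute timestamp on each buffer:

```python
# Sketch only: attach a UNIX-epoch timestamp to each buffer as a
# GstReferenceTimestampMeta near the source and read it back in any branch.
# It is NOT verified that interpipesink/interpipesrc, nvstreammux, or the
# encoders preserve this meta; elements that create new buffers may drop it.
import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

UNIX_CAPS = Gst.Caps.from_string("timestamp/x-unix")

def stamp_probe(pad, info, u_data):
    buf = info.get_buffer()
    # The buffer must be writable to add a meta; stamping right at the
    # source, where it normally holds a single reference, is safest.
    if buf is not None:
        buf.add_reference_timestamp_meta(UNIX_CAPS, time.time_ns(), Gst.CLOCK_TIME_NONE)
    return Gst.PadProbeReturn.OK

def read_probe(pad, info, u_data):
    buf = info.get_buffer()
    if buf is not None:
        meta = buf.get_reference_timestamp_meta(UNIX_CAPS)
        if meta is not None:
            print(f"frame UNIX time: {meta.timestamp} ns")
    return Gst.PadProbeReturn.OK
```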
Let me know if this is clear or if further explanation is needed.
I’ve encountered another issue related to PTS inconsistency across the pipeline, particularly when using a heavier inference model.
Context:
I’m attaching a probe function to the nvinfer element to monitor the PTS values during inference. However, the values are inconsistent when using a model that takes around 100 ms per frame for inference and postprocessing.
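For reference, here is a sketch of the kind of probe I am using to log the PTS delta between consecutive frames (the element name primary-infer is just a placeholder):

```python
# Sketch of a PTS-delta probe on the nvinfer src pad.
# "primary-infer" is a placeholder element name, not from the real pipeline.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

last_pts = None

def pts_delta_probe(pad, info, u_data):
    global last_pts
    buf = info.get_buffer()
    if buf is not None and buf.pts != Gst.CLOCK_TIME_NONE:
        if last_pts is not None:
            delta_ms = (buf.pts - last_pts) / 1_000_000
            print(f"PTS delta: {delta_ms:.1f} ms")  # ~16.6 ms expected at 60 FPS
        last_pts = buf.pts
    return Gst.PadProbeReturn.OK

# nvinfer = pipeline.get_by_name("primary-infer")
# nvinfer.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pts_delta_probe, 0)
```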
Configuration:
Stream source: 60 FPS (≈16.6 ms per frame)
Model batch size: 1
Interval: Controlled via nvinfer config
Observations:
When the interval is set to 0, meaning inference is run on every frame, the difference between PTS values of consecutive frames at the inference element is around 35–40 ms, instead of the expected ~16.6 ms. Here's a screenshot of the PTS differences:
When I set interval=10, this delay occurs every 10th frame, as expected given the batch size of 1. However, it is smaller, about 25 ms, but it still occurs. The strange thing is that a later PTS difference is much smaller than 16 ms (e.g., 10 ms). Does that mean it compensates for the delay encountered previously?
This means that PTS is not reliable for timing when using models that take longer than the frame interval to infer (i.e., >16.6 ms for 60 FPS). The inference-induced delay appears to accumulate and propagate, affecting the timestamps, which defeats the purpose of using PTS as a frame-level synchronization reference.
Question:
How can I rely on PTS values when using heavier models that introduce delays into the pipeline?
Is there a way to maintain accurate, frame-level synchronization or compensate for inference time so PTS remains meaningful and consistent across the pipeline?
1. I don't think multiple pipelines will help with this issue. This is a performance issue, and multiple pipelines will not improve it.
2. For the first pipeline, is the PTS interval output by nvarguscamerasrc strictly 16.67 ms?
3. Do interpipesink and interpipesrc modify the PTS of the GstBuffer? If so, DeepStream cannot solve this problem, which is why I do not recommend using multiple pipelines.
4. If nvarguscamerasrc strictly outputs 60 fps video frames, then even if nvinfer takes a long time (e.g., 100 ms) and nveglglessink cannot guarantee 60 fps rendering, the encoded output file (H.264/H.265) will still be 60 fps as long as no frames are dropped. During playback, the PTS spacing is strictly 16.67 ms, so 60 fps is still preserved.
Try this pipeline, and add a probe function at the queue src pad to monitor the PTS of the GstBuffer.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.