Using a global timestamp across all elements in a DeepStream pipeline (even non-inference ones)

• Hardware Platform (Jetson / GPU) : NVIDIA Jetson AGX Orin
• DeepStream Version : 7.1
• JetPack Version (valid for Jetson only) : 6.1
• TensorRT Version : 8.6.2.3
• Issue Type( questions, new requirements, bugs) : question
Hello everyone,

I’m working with a Python-based DeepStream pipeline that looks like this:

The pipeline consists of three separate branches:

  • H264 Branch – saves encoded H.264 files.
  • H265 Branch – saves encoded HEVC files.
  • Inference Branch – performs three separate inference tasks (as shown at the bottom of the diagram).

The Problem:

I’m currently facing an issue where I cannot reliably extract the timestamp for the same frame (a global timestamp) across the H.265 and inference branches.

In the Inference branch, I can easily access the timestamp via the ntp_timestamp field of NvDsFrameMeta, which is very convenient. I can also retrieve the frame_num for precise frame identification.

However, in the H265 and H264 branches, the frames do not pass through an inference element, so I do not have access to the NvDsFrameMeta metadata that contains ntp_timestamp or frame_num. All I have at that stage is the GstBuffer, which only provides pts and dts. These do not carry a UNIX timestamp; they are times in nanoseconds relative to the start of the pipeline, and they do not map directly to frame numbers or absolute (real-time) timestamps.

My Question:

How can I extract accurate timing information in the non-inference branches (e.g., H265)?

Is there a DeepStream plugin or element that could help propagate or attach NvDsFrameMeta (or similar metadata) to these branches so that timestamps are accessible globally for the same frame? Would adding an nvstreammux (or a similar metadata-propagating element) in those branches allow me to retrieve the same metadata as in the inference branch?
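To make the requirement concrete, here is a rough, untested sketch of the kind of mechanism I am after, using the generic GstReferenceTimestampMeta from GStreamer as an example. The probe placement and the caps string are just placeholders, and I do not know whether such meta survives the interpipe elements:

import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Arbitrary caps used as the "reference" identifier for this timestamp meta;
# it only has to match between the writer and the readers.
WALLCLOCK_CAPS = Gst.Caps.from_string("timestamp/x-unix-epoch")

def attach_wallclock_probe(pad, info):
    # Attach a UNIX-epoch timestamp (ns) to each buffer, e.g. upstream of the split point.
    buf = info.get_buffer()
    if buf is None:
        return Gst.PadProbeReturn.OK
    # Assumes the buffer is writable here; otherwise GStreamer will warn.
    buf.add_reference_timestamp_meta(WALLCLOCK_CAPS, time.time_ns(), Gst.CLOCK_TIME_NONE)
    return Gst.PadProbeReturn.OK

def read_wallclock_probe(pad, info):
    # Read the timestamp back in any branch (H.264, H.265, inference).
    buf = info.get_buffer()
    if buf is None:
        return Gst.PadProbeReturn.OK
    meta = buf.get_reference_timestamp_meta(WALLCLOCK_CAPS)
    if meta is not None:
        print(f"frame epoch ns: {meta.timestamp}, pts: {buf.pts}")
    return Gst.PadProbeReturn.OK

# Both probes would be added with pad.add_probe(Gst.PadProbeType.BUFFER, <probe>).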

PS

I cannot save the H.265 files after inference because I need them at 4K resolution.

Any guidance on how to solve this would be greatly appreciated. Thanks.

Hi,

You can use the PTS (presentation timestamp) from the GstBuffer to synchronize frames across branches, even if you don’t have NvDsFrameMeta. While PTS is relative to the pipeline start, it remains consistent across branches, allowing you to match frames.

Here’s how to extract it:

/* Read the presentation timestamp, e.g. inside a buffer pad probe or appsink callback */
GstClockTime pts = GST_BUFFER_PTS(buffer);
g_print("PTS: %" GST_TIME_FORMAT "\n", GST_TIME_ARGS(pts));

This gives you the frame timestamp in nanoseconds. You can use it as a proxy to align frames between the H265 and inference branches.
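Since your pipeline is Python-based, the equivalent from a pad probe would look roughly like this (a sketch; where you attach it is up to you, e.g. the H.265 encoder's sink pad):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def pts_probe(pad, info):
    buf = info.get_buffer()
    if buf is not None and buf.pts != Gst.CLOCK_TIME_NONE:
        # PTS in nanoseconds, relative to the start of the stream
        print(f"PTS: {buf.pts} ns ({buf.pts / Gst.SECOND:.6f} s)")
    return Gst.PadProbeReturn.OK

def add_pts_probe(element):
    # Attach to e.g. the sink pad of the H.265 encoder
    element.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, pts_probe)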

We are also working on a GStreamer element for synchronization, but it is in its early stages and there is no release available yet. It functions as a multiqueue with buffer synchronization based on timestamps. The wiki page is bare-bones at the moment, but you can check it out later if you are interested: GStreamer Buffer Synchronization

1. Can you share your goals for doing this? I think the following pipeline is simpler and avoids pipeline synchronization issues (a rough Python sketch of this layout is included at the end of this reply).

argus --> tee --> | --> h264 enc branch
                  | --> h265 enc branch
                  | --> nvstreammux --> nvinfer --> 

2. interpipesink is a subclass of appsink. Generally speaking, you can use the PTS (GST_BUFFER_PTS(buffer)) as a reference clock. However, interpipesink is not provided by DeepStream; for further questions, please consult RidgeRun.

This seems to be another problem, but I think it is due to the pipeline. Please share your goals to determine whether the pipeline can be optimized.
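For reference, here is a rough Gst.parse_launch sketch of the tee-based layout from point 1. The concrete elements and values (nvarguscamerasrc, the 4K/60 caps, the output file names, and the nvinfer config path) are assumptions about your setup, so adjust them as needed:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# One capture, three branches off a single tee; each branch gets its own queue.
pipeline = Gst.parse_launch(
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM),width=3840,height=2160,framerate=60/1 ! "
    "tee name=t "
    "t. ! queue ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=out_h264.mkv "
    "t. ! queue ! nvv4l2h265enc ! h265parse ! matroskamux ! filesink location=out_h265.mkv "
    "t. ! queue ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=infer_config.txt ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)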

Thanks @miguel.taylor and @junshengy for your responses.

My goal is to maintain a consistent global timestamp across all branches of the pipeline for each frame, particularly across the inference and H.265 recording branches. I want to ensure that frame N carries the same timestamp (UNIX epoch time in nanoseconds) in all branches, regardless of processing time or delays.

Current Challenges:

  1. Timestamp Consistency: Here is my interpipesink and interpipesrc configuration:
# Sink side: publishes buffers from the main capture pipeline to the branches
main_interpipesink = create_pipeline_element(
    "interpipesink",
    "main-interpipesink",
    "Main Interpipesink",
    self.logger,
)
main_interpipesink.set_property("name", "main-interpipesink")
main_interpipesink.set_property("async", False)
main_interpipesink.set_property("sync", False)  # do not throttle on the clock
main_interpipesink.set_property("forward-events", True)
main_interpipesink.set_property("forward-eos", True)

# Source side of the H.264 branch, listening to the sink above
h264_interpipesrc = create_pipeline_element(
    "interpipesrc", "h264-interpipesrc", "H264 Interpipesrc", self.logger
)
h264_interpipesrc.set_property("listen-to", "main-interpipesink")
h264_interpipesrc.set_property("is-live", True)
h264_interpipesrc.set_property("stream-sync", 0)  # with this mode the PTS is restarted per branch (see below)
  With this setup, the PTS gets reset per interpipesrc, which prevents me from propagating a custom global timestamp (for example, one taken from time.time_ns()). Using the PTS directly isn’t ideal either, as it represents time relative to the pipeline start, not the absolute/global time I need (a UNIX timestamp such as 1750747123). I previously tried generating a global timestamp just before loop.run(), but it was off by ~20 ms from the actual pipeline start. One possible way to map the PTS to epoch time is sketched after this list.
  2. Inference-Induced Delays: One of my inference branches takes ~100ms per frame (including inference + post-processing). This creates unintended delays in other branches, including H.265 recording. My expectation was that interpipesink/interpipesrc would isolate branches and allow asynchronous execution. However, I still observe delays in the H.265 branch during inference, leading to inconsistent frame intervals. For instance, a frame that should be saved every second ends up irregularly timed once the inference branch is active. Disabling the inference branch eliminates this issue.
  3. Why Not tee: I considered using tee, but it doesn’t solve the issue. tee shares buffer references across branches. Hence, a delay in one branch (e.g., inference) propagates to all others, including H.265 encoding.
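For point 1, here is a rough sketch of the mapping I have in mind: sample the wall-clock time once the pipeline itself reaches PLAYING (rather than before loop.run()) and treat that instant as PTS 0, so each frame’s epoch time is offset + PTS. It is only approximate and only helps where the original PTS is still intact, i.e. before interpipesrc rewrites it:

import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

epoch_at_start_ns = None  # wall-clock time corresponding to PTS == 0

def on_bus_message(bus, msg, pipeline):
    # Sample the epoch offset when the pipeline (not an individual element) reaches PLAYING.
    global epoch_at_start_ns
    if msg.type == Gst.MessageType.STATE_CHANGED and msg.src == pipeline:
        old, new, pending = msg.parse_state_changed()
        if new == Gst.State.PLAYING and epoch_at_start_ns is None:
            epoch_at_start_ns = time.time_ns()

def epoch_probe(pad, info):
    # Derive an absolute timestamp for a frame from its PTS.
    buf = info.get_buffer()
    if buf is None or buf.pts == Gst.CLOCK_TIME_NONE or epoch_at_start_ns is None:
        return Gst.PadProbeReturn.OK
    frame_epoch_ns = epoch_at_start_ns + buf.pts
    print(f"pts={buf.pts} ns  epoch={frame_epoch_ns} ns")
    return Gst.PadProbeReturn.OK

# Hook-up during setup (pipeline is the Gst.Pipeline instance):
#   bus = pipeline.get_bus(); bus.add_signal_watch()
#   bus.connect("message", on_bus_message, pipeline)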

Summary:

What I need is a way to:

  • Assign and propagate a global, frame-specific timestamp (specifically UNIX time in nanoseconds for every frame) to every branch.
  • Ensure that the same frame (e.g., frame 100) carries the same timestamp across all branches: inference, H.265, and H.264.
  • Isolate branches so delays in the inference pipeline do not affect the others.

Is there a recommended DeepStream-compatible way to propagate a custom timestamp across multiple branches while avoiding shared buffer dependency and timing desynchronization? Any advice on structuring such a pipeline efficiently would be appreciated.
Let me know if this is clear or if further explanation is needed.

I’ve encountered another issue related to PTS inconsistency across the pipeline, particularly when using a heavier inference model.

Context:

I’m attaching a probe function to the nvinfer element to monitor the PTS values during inference. However, the values are inconsistent when using a model that takes around 100 ms per frame for inference and postprocessing.
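The probe is roughly along the following lines (a simplified sketch; logging the wall-clock interval alongside the PTS interval is an extra that helps tell apart gaps in the timestamps themselves from frames merely arriving late):

import time
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

prev_pts = None
prev_wall_ns = None

def pts_delta_probe(pad, info):
    # Log the PTS step and the wall-clock step between consecutive buffers on this pad.
    global prev_pts, prev_wall_ns
    buf = info.get_buffer()
    if buf is None or buf.pts == Gst.CLOCK_TIME_NONE:
        return Gst.PadProbeReturn.OK
    now_ns = time.time_ns()
    if prev_pts is not None:
        pts_delta_ms = (buf.pts - prev_pts) / 1e6
        wall_delta_ms = (now_ns - prev_wall_ns) / 1e6
        print(f"pts delta: {pts_delta_ms:.1f} ms   arrival delta: {wall_delta_ms:.1f} ms")
    prev_pts, prev_wall_ns = buf.pts, now_ns
    return Gst.PadProbeReturn.OK

# Attached during setup, e.g. ("primary-inference" is a placeholder element name):
#   pipeline.get_by_name("primary-inference").get_static_pad("src") \
#           .add_probe(Gst.PadProbeType.BUFFER, pts_delta_probe)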

Configuration:

  • Stream source: 60 FPS (≈16.6 ms per frame)
  • Model batch size: 1
  • Interval: Controlled via nvinfer config

Observations:

  • When the interval is set to 0, meaning inference is run on every frame, the difference between PTS values of consecutive frames at the inference element is around 35–40 ms instead of the expected ~16.6 ms. Here’s a screenshot of the PTS differences:

  • When I set interval=10, this delay occurs on every 10th frame, as expected given the batch size of 1. However, it is smaller (about 25 ms), but it still occurs. The strange thing is that a later PTS difference is much smaller than 16 ms (e.g., 10 ms). Does this mean the pipeline compensates for the delay it previously accumulated?

Problem:

This means that PTS is not reliable for timing when using models that take longer than the frame interval to infer (i.e., >16.6 ms for 60 FPS). The inference-induced delay appears to accumulate and propagate, affecting the timestamps, which defeats the purpose of using PTS as a frame-level synchronization reference.


Question:

How can I rely on PTS values when using heavier models that introduce delays into the pipeline?

Is there a way to maintain accurate, frame-level synchronization or compensate for inference time so PTS remains meaningful and consistent across the pipeline?