• Hardware Platform (Jetson / GPU) : NVIDIA Jetson AGX Orin
• DeepStream Version : 7.1
• JetPack Version (valid for Jetson only) : 6.1
• TensorRT Version : 8.6.2.3
• Issue Type (questions, new requirements, bugs) : question
Hello,
I’m encountering a challenge while saving H.265 (HEVC) video fragments in a specific branch of my DeepStream pipeline.
This branch is responsible for saving .hevc files every 60 frames; however, no inference happens in it, so frame_meta.ntp_timestamp is unavailable.
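For illustration only, the shape of that branch is roughly the following (element names are representative, not my exact pipeline):

... tee name=t  t. ! queue ! nvv4l2h265enc ! h265parse ! appsink name=hevc_sink emit-signals=true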
To approximate a global time reference, I currently:
- Set a pipeline_start_time before running the pipeline:
self.logger.info("----- Starting pipeline -----")
# Start pipeline
self.pipeline.set_state(Gst.State.PLAYING)
self.pipeline_start_time = int(time.time_ns() / 1_000_000)  # ms since epoch
self.loop.run()
- Later, when saving files, I approximate each timestamp as pipeline_start_time plus the buffer PTS, converted from nanoseconds to milliseconds:
self.pipeline_start_time + gst_buffer.pts // 1_000_000  # ms + (ns -> ms)
Issue:
This workaround is inaccurate.
There’s a non-deterministic delay (around 20–30 ms) between setting pipeline_start_time and playback actually starting, which shifts the computed timestamps by about 2–3 frames.
As a result, the timestamps of the saved HEVC files are not precisely correct.
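For what it's worth, even a variant that blocks until the async state change completes before sampling the wall clock (sketched below; not my production code) narrows this gap but doesn't eliminate it:

self.pipeline.set_state(Gst.State.PLAYING)
# Wait for the async state change to finish before sampling the wall clock
ret, state, pending = self.pipeline.get_state(Gst.CLOCK_TIME_NONE)
if ret == Gst.StateChangeReturn.FAILURE:
    self.logger.error("Pipeline failed to reach PLAYING")
self.pipeline_start_time = int(time.time_ns() / 1_000_000)  # ms since epoch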
Question:
- Is there a way to retrieve a real UNIX timestamp directly from the GstBuffer without needing inference metadata?
- Is there a standard field or GstMeta (besides pts/dts) that provides wall-clock time at the buffer level?
- Or is there a better method to “synchronize” the system clock with the buffer timestamps accurately?
I want to avoid relying on a global pipeline start time and make the saved file timestamps truly accurate, similar to how ntp_timestamp is available in inference frame_meta.
Current appsink Callback Code:
Here’s my current implementation for appsink:
import logging

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst


def on_new_hevc_sample(
    appsink: Gst.Element,
    hevc_probe_data: dict,
    logger: logging.Logger,
    fps_stats: dict | None = None,
) -> Gst.FlowReturn:
    """
    Callback function for the appsink "new-sample" signal.
    Accumulates HEVC buffers and writes them to a file every 60 frames.
    """
    # Get the sample from appsink
    sample = appsink.emit("pull-sample")
    if not sample:
        logger.error("Unable to get sample from hevc appsink")
        return Gst.FlowReturn.ERROR

    # Get buffer from sample
    gst_buffer = sample.get_buffer()
    if not gst_buffer:
        logger.error("Unable to get GstBuffer from hevc sample")
        return Gst.FlowReturn.ERROR

    # Increment frame counter
    hevc_probe_data["hevc_frame_counter"] += 1

    # Store the PTS of the first frame in the chunk (first frame after a reset)
    if hevc_probe_data["hevc_frame_counter"] % 60 == 1:
        hevc_probe_data["first_frame_pts"] = gst_buffer.pts

    # Copy the buffer data and accumulate it in memory
    buffer_data = gst_buffer.extract_dup(0, gst_buffer.get_size())
    hevc_probe_data["memory_chunks"].append(buffer_data)

    # Check if we need to write to file (every 60 frames)
    if hevc_probe_data["hevc_frame_counter"] % 60 == 0:
        # Construct filename
        filename = (
            f"{hevc_probe_data['dirs']['temp']}/"
            f"v_temp{hevc_probe_data['hevc_file_index']:06d}.hevc"
        )
        try:
            # Write all accumulated chunks in one operation
            with open(filename, "wb") as f:
                for chunk in hevc_probe_data["memory_chunks"]:
                    f.write(chunk)

            # Create a structure with the file information
            structure = Gst.Structure.new_empty("custom-h265-fragment-closed")
            structure.set_value("location", filename)
            # Use the first frame's PTS (converted from ns to ms), not the last frame's
            structure.set_value(
                "hevc-pts", str(hevc_probe_data["first_frame_pts"] // 1_000_000)
            )
            # Post the structure as an element message on the bus
            appsink.post_message(Gst.Message.new_element(appsink, structure))

            # Reset for the next file
            hevc_probe_data["hevc_file_index"] += 1
            hevc_probe_data["memory_chunks"] = []
            hevc_probe_data["first_frame_pts"] = None
        except IOError as e:
            logger.error(f"Failed to write video buffer to {filename}: {e}")

    return Gst.FlowReturn.OK
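For context, the appsink is wired up roughly like this, and the posted element messages are consumed from the bus (names such as hevc_sink and on_bus_message are illustrative, not my exact code; pipeline, probe_data, and logger come from the surrounding application):

from functools import partial

appsink = pipeline.get_by_name("hevc_sink")  # element name is illustrative
appsink.set_property("emit-signals", True)   # required for the "new-sample" signal
appsink.connect(
    "new-sample",
    partial(on_new_hevc_sample, hevc_probe_data=probe_data, logger=logger),
)

def on_bus_message(bus, message, logger):
    # Pick up the custom element message posted by the appsink callback
    structure = message.get_structure()
    if structure and structure.get_name() == "custom-h265-fragment-closed":
        logger.info(
            "Fragment closed: %s (first-frame PTS %s ms)",
            structure.get_value("location"),
            structure.get_value("hevc-pts"),
        )

bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::element", on_bus_message, logger)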
Additional Thoughts:
- I’ve reviewed the buffer metadata but only found pts, dts, and duration; none of these maps directly to an absolute UNIX timestamp.
- The HEVC branch does not interact with nvinfer or similar components, so ntp_timestamp from DeepStream is not available.
- I considered deriving wall-clock time from the pipeline clock (Gst.Clock), but I’m unsure how reliable that is compared to the buffer timestamps; see the sketch after this list.
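Concretely, this is the kind of mapping I had in mind. It is a minimal sketch that assumes the pipeline uses the default monotonic GstSystemClock and that the live source stamps buffers so that running time equals PTS; buffer_pts_to_unix_ms is a hypothetical helper, not an existing API:

import time

import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst


def buffer_pts_to_unix_ms(pipeline: Gst.Pipeline, pts_ns: int) -> int:
    """Map a buffer PTS to an approximate UNIX timestamp in milliseconds.

    Assumes the default monotonic GstSystemClock and running time == PTS.
    """
    clock = pipeline.get_pipeline_clock()  # usually a monotonic GstSystemClock
    base_time = pipeline.get_base_time()   # clock time when the pipeline went PLAYING
    # Clock time at which this buffer was produced (monotonic domain)
    capture_clock_ns = base_time + pts_ns
    # Sample the wall clock and pipeline clock back-to-back to estimate their offset
    wall_ns = time.time_ns()
    clock_ns = clock.get_time()
    offset_ns = wall_ns - clock_ns
    return (capture_clock_ns + offset_ns) // 1_000_000

Does this hold up in practice, or does the offset between the monotonic pipeline clock and the wall clock jitter enough to matter?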
Final Request:
Any tips, suggestions, or pointers on:
- How to accurately align system time with buffer PTS,
- Extracting real-time timestamps at the buffer level,
- Handling timestamp correction in pure GStreamer pipelines without inference?
Thanks a lot for your help and for reading!