NTP Timestamps of Dahua Cameras Are Not Read Correctly

Setup

• Hardware Platform (GPU): NVIDIA RTX 6000 Ada
• DeepStream Version: 7.1
• TensorRT Version: 10.3.0.26
• NVIDIA GPU Driver Version: 535.247.01
• Issue Type: Question

Hi,

In our production environment we use two types of IP cameras:

  • AXIS P3265-LVE
  • DAHUA HDBW3841R-ZS-S2

Within our DeepStream pipeline, we want to extract the NTP timestamps provided by the RTCP Sender Reports via the ntp_timestamp property of the NvDsFrameMeta objects. Running a slightly modified version of deepstream-test3 (in Python, see below), we observed that the NTP timestamps of the AXIS cameras are extracted correctly, while the NTP timestamps of the DAHUA cameras always come out as 0.

Both cameras periodically transmit RTCP Sender Reports containing NTP timestamps.

Example AXIS RTCP Sender Report (sent approx. every 5 seconds):

Frame 475: 122 bytes on wire (976 bits), 122 bytes captured (976 bits) on interface enp1s0, id 0
Internet Protocol Version 4, Src: 172.22.71.20, Dst: 172.22.80.12
User Datagram Protocol, Src Port: 50001, Dst Port: 58215
Real-time Transport Control Protocol (Sender Report)
    [Stream setup by RTSP (frame 16)]
    10.. .... = Version: RFC 1889 Version (2)
    ..0. .... = Padding: False
    ...0 0000 = Reception report count: 0
    Packet type: Sender Report (200)
    Length: 6 (28 bytes)
    Sender SSRC: 0x0fe470ff (266629375)
    Timestamp, MSW: 3955684929 (0xebc6f641)
    Timestamp, LSW: 2938071163 (0xaf1f687b)
    [MSW and LSW as NTP timestamp: May  8, 2025 09:22:09.684072999 UTC]
    RTP timestamp: 2433433613
    Sender's packet count: 444
    Sender's octet count: 363357
Real-time Transport Control Protocol (Source description)
    [Stream setup by RTSP (frame 16)]
    10.. .... = Version: RFC 1889 Version (2)
    ..0. .... = Padding: False
    ...0 0001 = Source count: 1
    Packet type: Source description (202)
    Length: 12 (52 bytes)
    Chunk 1, SSRC/CSRC 0xFE470FF
    [RTCP frame length check: OK - 80 bytes]
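As a cross-check, Wireshark's decoded UTC time can be reproduced from the MSW/LSW pair: the MSW counts seconds since the NTP epoch (1900-01-01), which leads the Unix epoch by 2208988800 seconds, and the LSW is the fractional second in units of 2^-32. A minimal sketch (helper name is ours):

```python
import datetime

# The NTP epoch (1900-01-01) leads the Unix epoch (1970-01-01) by 2208988800 s
NTP_UNIX_OFFSET = 2208988800

def ntp_to_utc(msw, lsw):
    """Convert an RTCP Sender Report NTP timestamp (MSW = seconds since
    1900, LSW = fractional seconds in units of 2^-32) to a UTC datetime."""
    unix_seconds = msw - NTP_UNIX_OFFSET + lsw / 2**32
    return datetime.datetime.fromtimestamp(unix_seconds, tz=datetime.timezone.utc)

# Values from the AXIS Sender Report above
print(ntp_to_utc(3955684929, 2938071163))  # 2025-05-08 09:22:09.684073+00:00
```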

Example pipeline output for the AXIS cameras:

Frame Number= 150 Number of Objects= 1 Vehicle_count= 1 Person_count= 0
RTSP Timestamp: 2025-05-08 12:21:40

Example DAHUA RTCP Sender Report (sent approx. every 5 seconds):

Frame 19457: 86 bytes on wire (688 bits), 86 bytes captured (688 bits) on interface enp1s0, id 0
Internet Protocol Version 4, Src: 172.22.71.30, Dst: 172.22.80.12
User Datagram Protocol, Src Port: 26079, Dst Port: 58843
Real-time Transport Control Protocol (Sender Report)
    [Stream setup by RTSP (frame 15)]
    10.. .... = Version: RFC 1889 Version (2)
    ..0. .... = Padding: False
    ...0 0000 = Reception report count: 0
    Packet type: Sender Report (200)
    Length: 6 (28 bytes)
    Sender SSRC: 0x00000000 (0)
    Timestamp, MSW: 3955685193 (0xebc6f749)
    Timestamp, LSW: 317827579 (0x12f1a9fb)
    [MSW and LSW as NTP timestamp: May  8, 2025 09:26:33.073999999 UTC]
    RTP timestamp: 872153178
    Sender's packet count: 38550
    Sender's octet count: 54453171
Real-time Transport Control Protocol (Source description)
    [Stream setup by RTSP (frame 15)]
    10.. .... = Version: RFC 1889 Version (2)
    ..0. .... = Padding: False
    ...0 0001 = Source count: 1
    Packet type: Source description (202)
    Length: 3 (16 bytes)
    Chunk 1, SSRC/CSRC 0x0
    [RTCP frame length check: OK - 44 bytes]

Example pipeline output for the DAHUA cameras:

Frame Number= 150 Number of Objects= 0 Vehicle_count= 0 Person_count= 0
RTSP Timestamp: 1970-01-01 00:00:00
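The 1970-01-01 output follows directly from ntp_timestamp being 0: converting 0 nanoseconds with the same formula the probe uses (written here with a timezone-aware call) lands exactly on the Unix epoch.

```python
import datetime

ntp_timestamp = 0  # what we observe for the DAHUA streams
ts = ntp_timestamp / 1000000000  # nanoseconds -> seconds, as in the probe
print(datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
      .strftime('%Y-%m-%d %H:%M:%S'))  # 1970-01-01 00:00:00
```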

Pipeline used for the experiments:

#!/usr/bin/env python3

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2019-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

import sys
sys.path.append('../')
from pathlib import Path
from os import environ
import gi
import configparser
import argparse
gi.require_version('Gst', '1.0')
from gi.repository import GLib, Gst
from ctypes import *
import time
import math
import platform
from common.platform_info import PlatformInfo
from common.bus_call import bus_call
from common.FPS import PERF_DATA

import pyds

import datetime

no_display = False
silent = False
file_loop = False
perf_data = None
measure_latency = False

MAX_DISPLAY_LEN=64
PGIE_CLASS_ID_VEHICLE = 0
PGIE_CLASS_ID_BICYCLE = 1
PGIE_CLASS_ID_PERSON = 2
PGIE_CLASS_ID_ROADSIGN = 3
MUXER_OUTPUT_WIDTH=1920
MUXER_OUTPUT_HEIGHT=1080
MUXER_BATCH_TIMEOUT_USEC = 33000
TILED_OUTPUT_WIDTH=1280
TILED_OUTPUT_HEIGHT=720
GST_CAPS_FEATURES_NVMM="memory:NVMM"
OSD_PROCESS_MODE= 0
OSD_DISPLAY_TEXT= 1
pgie_classes_str= ["Vehicle", "TwoWheeler", "Person","RoadSign"]

# pgie_src_pad_buffer_probe will extract metadata received on the pgie src pad
# and update params for drawing rectangle, object information etc.
def pgie_src_pad_buffer_probe(pad,info,u_data):
    frame_number=0
    num_rects=0
    got_fps = False
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        print("Unable to get GstBuffer ")
        return
    # Retrieve batch metadata from the gst_buffer
    # Note that pyds.gst_buffer_get_nvds_batch_meta() expects the
    # C address of gst_buffer as input, which is obtained with hash(gst_buffer)

    # Enable latency measurement via probe if environment variable NVDS_ENABLE_LATENCY_MEASUREMENT=1 is set.
    # To enable component level latency measurement, please set environment variable
    # NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1 in addition to the above.
    global measure_latency
    if measure_latency:
        num_sources_in_batch = pyds.nvds_measure_buffer_latency(hash(gst_buffer))
        if num_sources_in_batch == 0:
            print("Unable to get number of sources in GstBuffer for latency measurement")

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            # Note that l_frame.data needs a cast to pyds.NvDsFrameMeta
            # The casting is done by pyds.NvDsFrameMeta.cast()
            # The casting also keeps ownership of the underlying memory
            # in the C code, so the Python garbage collector will leave
            # it alone.
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break

        ts = frame_meta.ntp_timestamp/1000000000 # Retrieve timestamp, put decimal in proper position for Unix format
        print("RTSP Timestamp:",datetime.datetime.utcfromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')) # Convert timestamp to UTC

        frame_number=frame_meta.frame_num
        l_obj=frame_meta.obj_meta_list
        num_rects = frame_meta.num_obj_meta
        obj_counter = {
        PGIE_CLASS_ID_VEHICLE:0,
        PGIE_CLASS_ID_PERSON:0,
        PGIE_CLASS_ID_BICYCLE:0,
        PGIE_CLASS_ID_ROADSIGN:0
        }
        while l_obj is not None:
            try: 
                # Casting l_obj.data to pyds.NvDsObjectMeta
                obj_meta=pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            obj_counter[obj_meta.class_id] += 1
            try: 
                l_obj=l_obj.next
            except StopIteration:
                break
        if not silent:
            print("Frame Number=", frame_number, "Number of Objects=",num_rects,"Vehicle_count=",obj_counter[PGIE_CLASS_ID_VEHICLE],"Person_count=",obj_counter[PGIE_CLASS_ID_PERSON])

        # Update frame rate through this probe
        stream_index = "stream{0}".format(frame_meta.pad_index)
        global perf_data
        perf_data.update_fps(stream_index)

        try:
            l_frame=l_frame.next
        except StopIteration:
            break

    return Gst.PadProbeReturn.OK



def cb_newpad(decodebin, decoder_src_pad,data):
    print("In cb_newpad\n")
    caps=decoder_src_pad.get_current_caps()
    if not caps:
        caps = decoder_src_pad.query_caps()
    gststruct=caps.get_structure(0)
    gstname=gststruct.get_name()
    source_bin=data
    features=caps.get_features(0)

    # Need to check if the pad created by the decodebin is for video and not
    # audio.
    print("gstname=",gstname)
    if(gstname.find("video")!=-1):
        # Link the decodebin pad only if decodebin has picked nvidia
        # decoder plugin nvdec_*. We do this by checking if the pad caps contain
        # NVMM memory features.
        print("features=",features)
        if features.contains("memory:NVMM"):
            # Get the source bin ghost pad
            bin_ghost_pad=source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write(" Error: Decodebin did not pick nvidia decoder plugin.\n")

def decodebin_child_added(child_proxy,Object,name,user_data):
    print("Decodebin child added:", name, "\n")
    if(name.find("decodebin") != -1):
        Object.connect("child-added",decodebin_child_added,user_data)

    if "source" in name:
        source_element = child_proxy.get_by_name("source")
        if source_element.find_property('drop-on-latency') != None:
            Object.set_property("drop-on-latency", True)

    if name.find("source") != -1:
        pyds.configure_source_for_ntp_sync(hash(Object))


def create_source_bin(index,uri):
    print("Creating source bin")

    # Create a source GstBin to abstract this bin's content from the rest of the
    # pipeline
    bin_name="source-bin-%02d" %index
    print(bin_name)
    nbin=Gst.Bin.new(bin_name)
    if not nbin:
        sys.stderr.write(" Unable to create source bin \n")

    # Source element for reading from the uri.
    # We will use decodebin and let it figure out the container format of the
    # stream and the codec and plug the appropriate demux and decode plugins.
    if file_loop:
        # use nvurisrcbin to enable file-loop
        uri_decode_bin=Gst.ElementFactory.make("nvurisrcbin", "uri-decode-bin")
        uri_decode_bin.set_property("file-loop", 1)
        uri_decode_bin.set_property("cudadec-memtype", 0)
    else:
        uri_decode_bin=Gst.ElementFactory.make("uridecodebin", "uri-decode-bin")
    if not uri_decode_bin:
        sys.stderr.write(" Unable to create uri decode bin \n")
    # We set the input uri to the source element
    uri_decode_bin.set_property("uri",uri)
    # Connect to the "pad-added" signal of the decodebin which generates a
    # callback once a new pad for raw data has been created by the decodebin
    uri_decode_bin.connect("pad-added",cb_newpad,nbin)
    uri_decode_bin.connect("child-added",decodebin_child_added,nbin)

    # We need to create a ghost pad for the source bin which will act as a proxy
    # for the video decoder src pad. The ghost pad will not have a target right
    # now. Once the decode bin creates the video decoder and generates the
    # cb_newpad callback, we will set the ghost pad target to the video decoder
    # src pad.
    Gst.Bin.add(nbin,uri_decode_bin)
    bin_pad=nbin.add_pad(Gst.GhostPad.new_no_target("src",Gst.PadDirection.SRC))
    if not bin_pad:
        sys.stderr.write(" Failed to add ghost pad in source bin \n")
        return None
    return nbin

def main(args, requested_pgie=None, config=None, disable_probe=False):
    global perf_data
    perf_data = PERF_DATA(len(args))

    number_sources=len(args)

    platform_info = PlatformInfo()
    # Standard GStreamer initialization
    Gst.init(None)

    # Create gstreamer elements */
    # Create Pipeline element that will form a connection of other elements
    print("Creating Pipeline \n ")
    pipeline = Gst.Pipeline()
    is_live = False

    if not pipeline:
        sys.stderr.write(" Unable to create Pipeline \n")
    print("Creating streamux \n ")

    # Create nvstreammux instance to form batches from one or more sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
    if not streammux:
        sys.stderr.write(" Unable to create NvStreamMux \n")

    pipeline.add(streammux)
    for i in range(number_sources):
        print("Creating source_bin ",i," \n ")
        uri_name=args[i]
        if uri_name.find("rtsp://") == 0 :
            is_live = True
        source_bin=create_source_bin(i, uri_name)
        if not source_bin:
            sys.stderr.write("Unable to create source bin \n")
        pipeline.add(source_bin)
        padname="sink_%u" %i
        sinkpad= streammux.request_pad_simple(padname) 
        if not sinkpad:
            sys.stderr.write("Unable to create sink pad bin \n")
        srcpad=source_bin.get_static_pad("src")
        if not srcpad:
            sys.stderr.write("Unable to create src pad bin \n")
        srcpad.link(sinkpad)
    queue1=Gst.ElementFactory.make("queue","queue1")
    queue2=Gst.ElementFactory.make("queue","queue2")
    queue3=Gst.ElementFactory.make("queue","queue3")
    queue4=Gst.ElementFactory.make("queue","queue4")
    queue5=Gst.ElementFactory.make("queue","queue5")
    pipeline.add(queue1)
    pipeline.add(queue2)
    pipeline.add(queue3)
    pipeline.add(queue4)
    pipeline.add(queue5)

    nvdslogger = None

    print("Creating Pgie \n ")
    if requested_pgie != None and (requested_pgie == 'nvinferserver' or requested_pgie == 'nvinferserver-grpc') :
        pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
    elif requested_pgie != None and requested_pgie == 'nvinfer':
        pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    else:
        pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")

    if not pgie:
        sys.stderr.write(" Unable to create pgie :  %s\n" % requested_pgie)

    if disable_probe:
        # Use nvdslogger for perf measurement instead of probe function
        print ("Creating nvdslogger \n")
        nvdslogger = Gst.ElementFactory.make("nvdslogger", "nvdslogger")

    print("Creating tiler \n ")
    tiler=Gst.ElementFactory.make("nvmultistreamtiler", "nvtiler")
    if not tiler:
        sys.stderr.write(" Unable to create tiler \n")
    print("Creating nvvidconv \n ")
    nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
    if not nvvidconv:
        sys.stderr.write(" Unable to create nvvidconv \n")
    print("Creating nvosd \n ")
    nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
    if not nvosd:
        sys.stderr.write(" Unable to create nvosd \n")
    nvosd.set_property('process-mode',OSD_PROCESS_MODE)
    nvosd.set_property('display-text',OSD_DISPLAY_TEXT)

    if file_loop:
        if platform_info.is_integrated_gpu():
            # Set nvbuf-memory-type=4 for integrated gpu for file-loop (nvurisrcbin case)
            streammux.set_property('nvbuf-memory-type', 4)
        else:
            # Set nvbuf-memory-type=2 for x86 for file-loop (nvurisrcbin case)
            streammux.set_property('nvbuf-memory-type', 2)

    if no_display:
        print("Creating Fakesink \n")
        sink = Gst.ElementFactory.make("fakesink", "fakesink")
        sink.set_property('enable-last-sample', 0)
        sink.set_property('sync', 0)
    else:
        if platform_info.is_integrated_gpu():
            print("Creating nv3dsink \n")
            sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
            if not sink:
                sys.stderr.write(" Unable to create nv3dsink \n")
        else:
            if platform_info.is_platform_aarch64():
                print("Creating nv3dsink \n")
                sink = Gst.ElementFactory.make("nv3dsink", "nv3d-sink")
            else:
                print("Creating EGLSink \n")
                sink = Gst.ElementFactory.make("nveglglessink", "nvvideo-renderer")
            if not sink:
                sys.stderr.write(" Unable to create egl sink \n")

    if not sink:
        sys.stderr.write(" Unable to create sink element \n")

    if is_live:
        print("At least one of the sources is live")
        streammux.set_property('live-source', 1)

    streammux.set_property('width', 1920)
    streammux.set_property('height', 1080)
    streammux.set_property('batch-size', number_sources)
    streammux.set_property('batched-push-timeout', MUXER_BATCH_TIMEOUT_USEC)
    streammux.set_property("attach-sys-ts", 0)
    streammux.set_property("frame-duration", 0)

    if requested_pgie == "nvinferserver" and config != None:
        pgie.set_property('config-file-path', config)
    elif requested_pgie == "nvinferserver-grpc" and config != None:
        pgie.set_property('config-file-path', config)
    elif requested_pgie == "nvinfer" and config != None:
        pgie.set_property('config-file-path', config)
    else:
        pgie.set_property('config-file-path', "dstest3_pgie_config.txt")
    pgie_batch_size=pgie.get_property("batch-size")
    if(pgie_batch_size != number_sources):
        print("WARNING: Overriding infer-config batch-size",pgie_batch_size," with number of sources ", number_sources," \n")
        pgie.set_property("batch-size",number_sources)
    tiler_rows=int(math.sqrt(number_sources))
    tiler_columns=int(math.ceil((1.0*number_sources)/tiler_rows))
    tiler.set_property("rows",tiler_rows)
    tiler.set_property("columns",tiler_columns)
    tiler.set_property("width", TILED_OUTPUT_WIDTH)
    tiler.set_property("height", TILED_OUTPUT_HEIGHT)
    if platform_info.is_integrated_gpu():
        tiler.set_property("compute-hw", 2)
    else:
        tiler.set_property("compute-hw", 1)
    sink.set_property("qos",0)

    print("Adding elements to Pipeline \n")
    pipeline.add(pgie)
    if nvdslogger:
        pipeline.add(nvdslogger)
    pipeline.add(tiler)
    pipeline.add(nvvidconv)
    pipeline.add(nvosd)
    pipeline.add(sink)

    print("Linking elements in the Pipeline \n")
    streammux.link(queue1)
    queue1.link(pgie)
    pgie.link(queue2)
    if nvdslogger:
        queue2.link(nvdslogger)
        nvdslogger.link(tiler)
    else:
        queue2.link(tiler)
    tiler.link(queue3)
    queue3.link(nvvidconv)
    nvvidconv.link(queue4)
    queue4.link(nvosd)
    nvosd.link(queue5)
    queue5.link(sink)   

    # create an event loop and feed gstreamer bus mesages to it
    loop = GLib.MainLoop()
    bus = pipeline.get_bus()
    bus.add_signal_watch()
    bus.connect ("message", bus_call, loop)
    pgie_src_pad=pgie.get_static_pad("src")
    if not pgie_src_pad:
        sys.stderr.write(" Unable to get src pad \n")
    else:
        if not disable_probe:
            pgie_src_pad.add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)
            # perf callback function to print fps every 5 sec
            GLib.timeout_add(5000, perf_data.perf_print_callback)

    # Enable latency measurement via probe if environment variable NVDS_ENABLE_LATENCY_MEASUREMENT=1 is set.
    # To enable component level latency measurement, please set environment variable
    # NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1 in addition to the above.
    if environ.get('NVDS_ENABLE_LATENCY_MEASUREMENT') == '1':
        print ("Pipeline Latency Measurement enabled!\nPlease set env var NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1 for Component Latency Measurement")
        global measure_latency
        measure_latency = True

    # List the sources
    print("Now playing...")
    for i, source in enumerate(args):
        print(i, ": ", source)

    print("Starting pipeline \n")
    # start playback and listen to events
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except:
        pass
    # cleanup
    print("Exiting app\n")
    pipeline.set_state(Gst.State.NULL)

def parse_args():

    parser = argparse.ArgumentParser(prog="deepstream_test_3",
                    description="deepstream-test3 multi stream, multi model inference reference app")
    parser.add_argument(
        "-i",
        "--input",
        help="Path to input streams",
        nargs="+",
        metavar="URIs",
        default=["a"],
        required=True,
    )
    parser.add_argument(
        "-c",
        "--configfile",
        metavar="config_location.txt",
        default=None,
        help="Choose the config-file to be used with specified pgie",
    )
    parser.add_argument(
        "-g",
        "--pgie",
        default=None,
        help="Choose Primary GPU Inference Engine",
        choices=["nvinfer", "nvinferserver", "nvinferserver-grpc"],
    )
    parser.add_argument(
        "--no-display",
        action="store_true",
        default=False,
        dest='no_display',
        help="Disable display of video output",
    )
    parser.add_argument(
        "--file-loop",
        action="store_true",
        default=False,
        dest='file_loop',
        help="Loop the input file sources after EOS",
    )
    parser.add_argument(
        "--disable-probe",
        action="store_true",
        default=False,
        dest='disable_probe',
        help="Disable the probe function and use nvdslogger for FPS",
    )
    parser.add_argument(
        "-s",
        "--silent",
        action="store_true",
        default=False,
        dest='silent',
        help="Disable verbose output",
    )
    # Check input arguments
    if len(sys.argv) == 1:
        parser.print_help(sys.stderr)
        sys.exit(1)
    args = parser.parse_args()

    stream_paths = args.input
    pgie = args.pgie
    config = args.configfile
    disable_probe = args.disable_probe
    global no_display
    global silent
    global file_loop
    no_display = args.no_display
    silent = args.silent
    file_loop = args.file_loop

    if config and not pgie or pgie and not config:
        sys.stderr.write ("\nEither pgie or configfile is missing. Please specify both! Exiting...\n\n\n\n")
        parser.print_help()
        sys.exit(1)
    if config:
        config_path = Path(config)
        if not config_path.is_file():
            sys.stderr.write ("Specified config-file: %s doesn't exist. Exiting...\n\n" % config)
            sys.exit(1)

    print(vars(args))
    return stream_paths, pgie, config, disable_probe

if __name__ == '__main__':
    stream_paths, pgie, config, disable_probe = parse_args()
    sys.exit(main(stream_paths, pgie, config, disable_probe))

Now we’d like to kindly ask two questions.
a) Why is the ntp_timestamp property always 0 for the DAHUA cameras even though the NTP timestamp is transmitted within the Sender Reports?
b) When we change the date and time of the AXIS cameras to a past time (e.g., a random date in 2024) while the above pipeline is running, the change is immediately reflected in the Sender Reports but not in the ntp_timestamp property. Once the pipeline is restarted, the modified date and time is reflected in the ntp_timestamp property as well. Does this mean the NTP timestamps of the Sender Reports are only processed when the RTSP stream is first established? If so, is there a way to process the NTP timestamps of the Sender Reports periodically rather than only at stream start?

Thank you very much for your time!

You can refer to topic 273551 to see if it’s a similar issue.
You can prepend GST_DEBUG=3,rtpjitterbuffer:6,nvstreammux:6 to your command to check the log info.

Thank you for responding!

a)

I re-ran the pipelines for 30 seconds with GST_DEBUG=3,rtpjitterbuffer:6,nvstreammux:6 and saved the logs.

Logs for AXIS camera:
logs_axis.zip (214.7 KB)

Logs for DAHUA camera:
logs_dahua.zip (1.5 MB)

And indeed, gst_debug_dahua.log contains the same lines as the topic you linked, i.e.:

0:00:00.449697279  2661 0x7a5c68002670 DEBUG     rtpjitterbuffer gstrtpjitterbuffer.c:4285:do_handle_sync:<rtpjitterbuffer0> ext SR 1562221477, base 18446744073709551615, clock-rate 0, clock-base 18446744073709551615, last-rtptime 18446744073709551615
0:00:00.449706326  2661 0x7a5c68002670 DEBUG     rtpjitterbuffer gstrtpjitterbuffer.c:4293:do_handle_sync:<rtpjitterbuffer0> keeping for later, no RTP values
0:00:00.449714472  2661 0x7a5c68002670 DEBUG     rtpjitterbuffer gstrtpjitterbuffer.c:4325:do_handle_sync:<rtpjitterbuffer0> keeping RTCP packet for later

Do you know what could be the cause of this and how to fix it?
In the RTCP Sender Reports posted above I do not see a significant difference between AXIS and DAHUA.

b)

I re-ran the pipeline for 30 seconds for the AXIS camera and changed the date back to 18.04.2024 after approx. 17 seconds.

These are the corresponding logs:
logs_axis_datetime_changed.zip (217.4 KB)

In gst_debug_axis_datetime_changed.log we can see these lines:

0:00:00.762343627  2849 0x769b64003090 WARN      nvstreammux_ntp gstnvstreammux_ntp.cpp:110:check_if_sys_rtcp_time_is_ntp_sync:<Stream-muxer> Either host or Source 0 seems to be out of NTP sync SYS TIME = 2025-05-09T07:46:31.549Z CALCULATED NTP TIME = 1970-01-01T00:00:00.000Z
...
0:00:03.362717966  2849 0x769b64003090 WARN      nvstreammux_ntp gstnvstreammux_ntp.cpp:300:gst_nvds_ntp_calculator_get_buffer_ntp:<Stream-muxer> Forward jump in NTP. Prev: 1970-01-01T00:00:00.000Z. Cur: 2025-05-09T07:46:33.566Z source_id 0

which means the NTP time is corrected at pipeline startup.

However, after changing the date at approx. 17 seconds we see:

0:00:17.141901752  2849 0x769b640029d0 WARN      nvstreammux_ntp gstnvstreammux_ntp.cpp:110:check_if_sys_rtcp_time_is_ntp_sync:<Stream-muxer> Either host or Source 0 seems to be out of NTP sync SYS TIME = 2025-05-09T07:46:47.929Z CALCULATED NTP TIME = 2024-04-18T07:46:00.208Z
...
0:00:17.282409461  2849 0x769b64003090 WARN      nvstreammux_ntp gstnvstreammux_ntp.cpp:171:apply_correction_if_needed_rtcp:<Stream-muxer> Dropping inconsistent NTP sync values for source 0

so the jump back to 2024 appears to be ignored.
Is this the intended behaviour because the time difference is so large?
How can we ensure that such changes are not ignored and are reflected in the ntp_timestamp property?

Thank you for your time!

Judging from the log and the GStreamer source code:

The base_rtptime from your DAHUA camera is -1.
Because all the related NTP timestamp processing is based on the GStreamer source code in gstrtpjitterbuffer.c, we suggest that you consult the camera vendor and also file a topic on the GStreamer forum. They can analyze similar problems more effectively.
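A quick sanity check (ours) that the huge value in the jitterbuffer log is this -1 stored in an unsigned 64-bit GstClockTime, i.e. GST_CLOCK_TIME_NONE:

```python
# 18446744073709551615, printed by rtpjitterbuffer for base, clock-base and
# last-rtptime, is the unsigned 64-bit representation of -1
# (GST_CLOCK_TIME_NONE): the jitterbuffer has no RTP base values yet, which
# is why it logs "keeping for later, no RTP values".
GST_CLOCK_TIME_NONE = 2**64 - 1  # value of ((GstClockTime) -1)
print(GST_CLOCK_TIME_NONE == 18446744073709551615)  # True
print(GST_CLOCK_TIME_NONE == (-1) % 2**64)          # True
```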

Ok, thank you for investigating!

Do you know anything about question b)?

Yes. This part is open source. You can refer to the comment of the function below.

deepstream\sources\gst-plugins\gst-nvmultistream2\gstnvstreammux_ntp.cpp
/* Apply correction to NTP TS if needed.
 * 1. Check if new NTP < prev NTP. If yes, calculate new NTP as
 *    prev NTP + avg frame time
 * 2. Check if new Sender Report is consistent with prev SR. This is done
 *    by calculating NTP of current buffer with new SR. The difference of this
 *    NTP with prev calculated NTP should be < 1.1 * (current buffer's pts - prev pts).
 *    If this condition is not met, the new SR is ignored.
 */
static inline GstClockTime
apply_correction_if_needed_rtcp (GstNvDsNtpCalculator *calc, GstClockTime ntp_ts, GstClockTime buf_pts)
{
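Rule 2 of the comment above can be restated in Python roughly as follows (an illustrative sketch with hypothetical names, not the actual implementation in gstnvstreammux_ntp.cpp; all times in nanoseconds):

```python
def sr_is_consistent(new_ntp, prev_ntp, buf_pts, prev_pts):
    """Sketch of rule 2: the NTP time the new Sender Report yields for the
    current buffer may deviate from the previously calculated NTP time by
    at most 1.1x the PTS delta; otherwise the new SR is ignored."""
    return abs(new_ntp - prev_ntp) < 1.1 * (buf_pts - prev_pts)

FRAME = 33_000_000                    # ~33 ms between frames at 30 fps
NOW = 1_746_700_000_000_000_000      # some NTP time in ns (hypothetical)

# A normal SR advances NTP roughly in step with the PTS -> accepted
print(sr_is_consistent(NOW + FRAME, NOW, buf_pts=2 * FRAME, prev_pts=FRAME))   # True

# A camera clock set back by a year produces a huge deviation -> the SR is
# dropped, matching the "Dropping inconsistent NTP sync values" warning
ONE_YEAR = 365 * 24 * 3600 * 10**9
print(sr_is_consistent(NOW - ONE_YEAR, NOW, buf_pts=2 * FRAME, prev_pts=FRAME))  # False
```

Under this rule, any backward clock jump larger than the inter-frame interval fails the check, which would explain why the change to 2024 only takes effect after a pipeline restart.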

Ok, thank you for the hint!