_NvDsFrameMeta.ntp_timestamp is 0 for the first 4 seconds of video

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.0 DP

I have a pipeline with RTSP sources. nvstreammux has attach-sys-ts=0 set, and I have called configure_source_for_ntp_sync(src) on each RTSP source.
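
For context, the relevant setup looks roughly like this (a minimal sketch; the camera URI and element names are placeholders, and configure_source_for_ntp_sync stands for however the app binds the closed-source DeepStream helper):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Sketch of the relevant configuration only; linking and the rest of the
# pipeline are elided.
streammux = Gst.ElementFactory.make("nvstreammux", "mux")
streammux.set_property("attach-sys-ts", 0)  # don't attach host time; expect RTCP-derived NTP

src = Gst.ElementFactory.make("rtspsrc", "camera-0")
src.set_property("location", "rtsp://<camera-ip>/stream")  # placeholder URI
configure_source_for_ntp_sync(src)  # the closed-source helper in question
```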

As I start to get inference results on a probe after the tracker, I see ntp_timestamp == 0 for the first 80 or so frames (4 seconds on a 20 fps source), but then the timestamp starts coming through correctly as non-zero. How long it takes to get a valid timestamp is inconsistent: sometimes a meaningful value arrives on frame 0, but when there is a delay, the first valid value always lands around frame 80.
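
The probe where I observe this looks roughly like the following (minimal pyds sketch; everything except the ntp_timestamp check is standard batch-meta iteration):

```python
import pyds
from gi.repository import Gst

def tracker_src_probe(pad, info, user_data):
    # Walk the batch meta attached by nvstreammux and log each frame's NTP time.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        if frame_meta.ntp_timestamp == 0:
            # No usable NTP time for this source yet.
            print(f"src {frame_meta.source_id} frame {frame_meta.frame_num}: ntp == 0")
        else:
            print(f"src {frame_meta.source_id} frame {frame_meta.frame_num}: "
                  f"ntp = {frame_meta.ntp_timestamp}")
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

It is attached with tracker.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, tracker_src_probe, None).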

What causes the delay in getting a non-zero ntp_timestamp?

Is there something asynchronous happening in configure_source_for_ntp_sync that explains the delay, and is there anything I can do to shorten or eliminate it? configure_source_for_ntp_sync is not open source, so I can’t see what it is doing. If it makes a request to the source (a network camera), I can see that needing to be asynchronous, and maybe the ~80-frame clustering of meaningful values is aligned to the next I-frame?

As a workaround, I wonder if I can insert a delay between calling configure_source_for_ntp_sync and starting the pipeline… or is the work done by configure_source_for_ntp_sync deferred until the pipeline is running?

The main job of configure_source_for_ntp_sync() is to configure rtpjitterbuffer to handle RTP and RTCP correctly. rtpjitterbuffer makes no request to the RTSP server. The NTP timestamp comes from the RTCP sender reports (https://www.ietf.org/rfc/rfc1889.txt) that the RTSP server sends. Please check your RTSP server for the interval between sender reports.
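
For scale: RFC 1889 recommends a minimum RTCP report interval of 5 seconds, and many cameras send sender reports at roughly that spacing. Under that assumption the observed delay lines up (illustrative arithmetic only; the 5 s figure is an assumption, and the real interval depends on the camera):

```python
# Back-of-envelope check. ASSUMPTION: the camera uses the ~5 s minimum
# RTCP report interval recommended by RFC 1889; the actual value varies.
fps = 20
sr_interval_s = 5.0
worst_case_frames = fps * sr_interval_s
print(worst_case_frames)  # 100.0 -> up to ~100 frames with ntp_timestamp == 0
# If playback starts just before an SR arrives, frame 0 already gets a valid
# NTP time, which matches the inconsistency described above.
```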

Thanks for the reply!

I don’t add rtpjitterbuffer to my pipeline… so is it configure_source_for_ntp_sync that inserts it, or does GStreamer’s standard pipeline building insert it on my behalf as it connects my RTSP source to the other elements?

My RTSP server is a camera, and none of its configuration pages exposes that interval. It will take me a moment to learn some client-side tools and see if I can figure out the RTCP sender report interval. Is it typically in the 4-second range, which would explain the variance in when the timestamp becomes available? I’m also not sure what the solution will be for me, since I can’t configure this on my (most?) ONVIF cameras. Even if I could set it to a fraction of that, I would still need to account for some missed frames. Is there a way to start the pipeline, then stop and restart it, without losing state?

My pipeline has a Tee right after h264parse: one branch of the Tee goes into nvstreammux, and the other goes straight into an hlssink2 that writes to disk for archive and playback without a re-encode. What I’d really like is to get the NTP timestamp onto the buffers going into hlssink2, so I can correlate inference frame output with the saved frames; the two branches run on different queues after the Tee. Since NTP doesn’t make it into the video fragments (.ts) written by hlssink2, I’ve been correlating by frame count, which I presume happens after any drops, so it shouldn’t drift. But I need all frames across cameras correlated by NTP, so I need to drop the frames where I don’t have a valid NTP time to get everything to line up.
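
For the dropping part, I plan to do something like this inside the probe (sketch only; record_frame is a hypothetical placeholder for wherever I store correlation data):

```python
def handle_frame(frame_meta):
    # Frames that arrive before the first RTCP sender report still carry
    # ntp_timestamp == 0 and can't be lined up across cameras, so skip them.
    if frame_meta.ntp_timestamp == 0:
        return
    record_frame(frame_meta.source_id, frame_meta.frame_num,
                 frame_meta.ntp_timestamp)  # hypothetical correlation sink
```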

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

If you want to use NTP from the RTSP server (which in your case is your camera), you must control your RTSP server to make it send a sender report (SR) when you need one.
