How to correct NTP timestamp by using every RTCP sender report

• Hardware Platform (Jetson / GPU): NX
• DeepStream Version: 6.1.1
• JetPack Version (valid for Jetson only): 5.0.2
• TensorRT Version: 8.4

Hi,
I am using the test5 app to process an H.264 stream via RTSP.
I found that after nvv4l2 decoding, the timestamp NvDsFrameMeta->ntp_timestamp
is based only on the FIRST RTCP SR; for every following frame the timestamp is calculated by
adding the PTS to that FIRST absolute timestamp. After thousands of frames an error of
several seconds has accumulated, which keeps me from syncing with my host device.
So, would you please tell me how to make deepstream-app receive every SR and recalculate the timestamps of the frames within each SR interval?
Thank you.
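(For reference: a minimal pad probe that logs the per-frame value discussed here might look like the sketch below. It is only an illustration; the probe placement and names are not from the test5 app.)

#include <gst/gst.h>
#include "gstnvdsmeta.h"

/* Illustrative probe: print NvDsFrameMeta->ntp_timestamp for every frame so
 * that drift against the host clock can be logged over time. */
static GstPadProbeReturn
ntp_log_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  NvDsMetaList *l;

  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (l = batch_meta->frame_meta_list; l != NULL; l = l->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l->data;
    /* ntp_timestamp is in nanoseconds since the Unix epoch */
    g_print ("source %u frame %d ntp %" G_GUINT64_FORMAT "\n",
        frame_meta->source_id, frame_meta->frame_num,
        frame_meta->ntp_timestamp);
  }
  return GST_PAD_PROBE_OK;
}

Attach it with gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER, ntp_log_probe, NULL, NULL) on, for example, the src pad of nvstreammux.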

Can you consider using the system timestamp as the PTS for your scenario? You can refer to the link below: attach-sys-ts
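(For reference, attach-sys-ts is a boolean property of nvstreammux; a minimal sketch of enabling it in code, assuming "streammux" points at the nvstreammux element, would be:)

/* Sketch only: stamp frames with the host system clock instead of the
 * RTCP-derived NTP time. */
g_object_set (G_OBJECT (streammux), "attach-sys-ts", TRUE, NULL);

In the reference-app config file this maps to the attach-sys-ts-as-ntp key in the [streammux] group.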

Using the system timestamp as the PTS does not let me sync with the host device behind the RTSP source. I really do need the absolute timestamp from the host, kept up to date over short intervals.

The DeepStream 6.2 documentation says:
2. NTP timestamp when attached at RTSP source - supported only if RTSP sources send RTCP Sender Reports (SR).

To configure the pipeline to attach these timestamps:

  • Set attach-sys-ts to FALSE on nvstreammux. Set the attach-sys-ts-as-ntp config parameter to 0 in [streammux] group of the application configuration file in the DeepStream reference app.
  • After creating an “rtspsrc” element or an “uridecodebin” element, application must call configure_source_for_ntp_sync() function and pass the GstElement pointer to this API. (Refer to create_rtsp_src_bin() in deepstream_source_bin.c file.) The API internally configures the pipeline to parse sender report and calculate NTP timestamps for each frame.
  • Make sure RTSP source can send RTCP Sender Reports.

In deepstream_source_bin.c, I found the following in 3 places:

if (g_strrstr (config->uri, "rtsp://") == config->uri) {
  configure_source_for_ntp_sync (bin->src_elem);
}

I do not understand how to make DeepStream read every SR and obtain the source timestamp for every SR interval.
Please help. Thank you.

I wonder whether rtspsrc can read only the FIRST SR from the source.
And is it possible to access the other SRs in the preprocess plugin?

In deepstream, where can I access the GstRTCPBuffer?
I found the method:

gst_rtcp_packet_sr_get_sender_info

gst_rtcp_packet_sr_get_sender_info (GstRTCPPacket * packet, guint32 * ssrc, guint64 * ntptime, guint32 * rtptime, guint32 * packet_count, guint32 * octet_count)

Parse the SR sender info and store the values.
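(For illustration only: the SRs live in the GStreamer RTP layer rather than in DeepStream metadata, so one place they can be observed is around the rtspsrc element itself. The sketch below assumes the gst-plugins-good rtpmanager behaviour: the "new-manager" signal on rtspsrc, the "get-internal-session" action signal on rtpbin, and the internal session's "on-receiving-rtcp" signal. Helper names such as on_new_manager are made up, and this is not verified against DeepStream's own NTP handling.)

#include <gst/gst.h>
#include <gst/rtp/gstrtcpbuffer.h>

/* Called for every incoming RTCP compound packet; prints the sender info of
 * each SR it contains. */
static void
on_receiving_rtcp (GObject *session, GstBuffer *buffer, gpointer user_data)
{
  GstRTCPBuffer rtcp = GST_RTCP_BUFFER_INIT;
  GstRTCPPacket packet;

  if (!gst_rtcp_buffer_map (buffer, GST_MAP_READ, &rtcp))
    return;

  if (gst_rtcp_buffer_get_first_packet (&rtcp, &packet)) {
    do {
      if (gst_rtcp_packet_get_type (&packet) == GST_RTCP_TYPE_SR) {
        guint32 ssrc, rtptime, packet_count, octet_count;
        guint64 ntptime;

        gst_rtcp_packet_sr_get_sender_info (&packet, &ssrc, &ntptime,
            &rtptime, &packet_count, &octet_count);
        /* ntptime is 64-bit NTP format (upper 32 bits: seconds since 1900,
         * lower 32 bits: fraction); rtptime is the RTP clock value at the
         * same instant, so every SR re-anchors the NTP<->RTP mapping. */
        g_print ("SR: ssrc=%u ntp=%" G_GUINT64_FORMAT " rtp=%u\n",
            ssrc, ntptime, rtptime);
      }
    } while (gst_rtcp_packet_move_to_next (&packet));
  }
  gst_rtcp_buffer_unmap (&rtcp);
}

/* Hypothetical hook: rtspsrc emits "new-manager" with its internal rtpbin. */
static void
on_new_manager (GstElement *rtspsrc, GstElement *manager, gpointer user_data)
{
  GObject *session = NULL;

  /* Session 0 is the first media stream; depending on timing it may be safer
   * to fetch it only once the pipeline is PLAYING. */
  g_signal_emit_by_name (manager, "get-internal-session", 0, &session);
  if (session) {
    g_signal_connect_after (session, "on-receiving-rtcp",
        G_CALLBACK (on_receiving_rtcp), NULL);
    g_object_unref (session);
  }
}

/* After creating the rtspsrc element (bin->src_elem in the reference app):
 *   g_signal_connect (src_elem, "new-manager", G_CALLBACK (on_new_manager), NULL);
 */

Whether and how DeepStream would then consume those values is a separate question; this only shows where each SR can be observed.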

As attached before: "NTP timestamp when attached at RTSP source - supported only if RTSP sources send RTCP Sender Reports (SR)." We already parse the info from the SR via the configure_source_for_ntp_sync callback. If you follow those instructions to configure it, the PTS will come from the SR.

Hi, thank you for the reply. Would you please provide a bit more detail?

As you use the test5 app, set the nvstreammux parameters below in your config file (see the excerpt after this list):
1. Set attach-sys-ts to FALSE on nvstreammux.
2. Set the attach-sys-ts-as-ntp config parameter to 0.
3. Set live-source to 1.
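For reference, in the reference-app config file this corresponds to something like the excerpt below (only the keys relevant here are shown; the rest of the [streammux] group stays as it already is):

[streammux]
live-source=1
# 0 = do not attach the system time; keep the RTCP-SR-derived NTP timestamp
attach-sys-ts-as-ntp=0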

Hi,
the configurations above are already in place.

And I am sure DeepStream has already parsed the FIRST SR's timestamp:
the NvDsFrameMeta->ntp_timestamp I see in the preprocess plugin is exactly the timestamp of the 1st SR,
and the timestamps of the following frames are calculated as NvDsFrameMeta->ntp_timestamp + PTS.
This method causes an accumulated error of up to several seconds after thousands of frames.
What I need is for DeepStream to parse each subsequent SR, update NvDsFrameMeta->ntp_timestamp from it, and restart the PTS offset from 0 again.
That would avoid the accumulated timestamp error and make inter-device sync possible.
The API in deepstream_source_bin.c,

if (g_strrstr (config->uri, "rtsp://") == config->uri) {
  configure_source_for_ntp_sync (bin->src_elem);
}

needs the variable bin->src_elem.
In the end, my question is: what is the API for accessing RTCP SRs in other plugins, such as the preprocess plugin, and what variable does it need?
Thank you.

What do you mean by "restart the PTS offset from 0 again"? Right now we get the SR and calculate the PTS with our own algorithm.
Could you provide a detailed description of your requirements for the PTS?
About the bin->src_elem parameter: it is a GStreamer element. If you can ensure that all your sources are RTSP, you can try to add the preprocess plugin for it.

Currently my PTS sequence is the following:
p0 = NTP_timestamp (from the FIRST SR)
p0, p0 + pts, p0 + 2*pts, p0 + 3*pts, ...

What I want is:
pn = NTP_timestamp (from the nth SR)
p0, p0 + pts, p0 + 2*pts, p0 + 3*pts, ... until the 2nd SR arrives,
p1, p1 + pts, p1 + 2*pts, p1 + 3*pts, ... until the 3rd SR arrives,
...
pn, pn + pts, pn + 2*pts, pn + 3*pts, ... until the (n+1)th SR arrives.

A PTS sequence like this would not accumulate a timing error.
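(In RTP terms each SR pairs the sender's wall-clock time with the RTP timestamp of the same instant, so the re-anchoring described above could be expressed as the small sketch below. This is only an illustration: the function name is made up and a 90 kHz RTP clock is assumed for H.264 video.)

#include <gst/gst.h>

#define RTP_CLOCK_RATE 90000   /* 90 kHz RTP clock assumed for H.264 video */

/* Sketch: map a frame's RTP timestamp to absolute time using the most recent
 * SR pair (sr_ntp_ns = SR wall clock in nanoseconds, sr_rtp = RTP clock value
 * at that same instant). */
static guint64
frame_ntp_ns (guint64 sr_ntp_ns, guint32 sr_rtp, guint32 frame_rtp)
{
  /* 32-bit RTP timestamps wrap around; the unsigned subtraction copes with a
   * single wrap between the SR and the frame. */
  guint32 delta_rtp = frame_rtp - sr_rtp;

  return sr_ntp_ns + gst_util_uint64_scale (delta_rtp, GST_SECOND, RTP_CLOCK_RATE);
}

Re-anchoring like this at every SR keeps the error bounded to whatever drift accumulates within a single SR interval, instead of letting it grow over the whole stream.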

We only synchronize once, when the pipeline runs through the configure_source_for_ntp_sync API. Currently we do not support the behavior you described. Could you try to synchronize multiple times with the g_timeout_add API?
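(For reference, the GLib pattern being suggested looks roughly like the sketch below; resync_cb and its body are placeholders, since what exactly would be re-synchronized there is the open question.)

/* Placeholder periodic task; the resync logic itself is not defined here. */
static gboolean
resync_cb (gpointer user_data)
{
  /* ... re-derive / correct the timestamp base ... */
  return G_SOURCE_CONTINUE;   /* keep the timeout firing */
}

/* e.g. run every 5000 ms from the application's main loop */
g_timeout_add (5000, resync_cb, NULL);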

Ok, I see.
Thank you.
I posted a similar topic in the RTSP category; it is OK to close it.

Could you try setting the frame-duration of nvstreammux to -1 and test again? That disables frame-rate-based NTP timestamp correction.

Well, I set frame-duration=-1.
It seems to make no difference.

OK. Could you attach the method you used to verify the error after thousands of frames?

The following are my actual read-image / encode / publish times on the host (one row per frame, three columns):
2023-06-01 14:22:55.704 2023-06-01 14:22:55.710 2023-06-01 14:22:55.711
2023-06-01 14:22:55.744 2023-06-01 14:22:55.750 2023-06-01 14:22:55.752
2023-06-01 14:22:55.785 2023-06-01 14:22:55.792 2023-06-01 14:22:55.793
2023-06-01 14:22:55.825 2023-06-01 14:22:55.832 2023-06-01 14:22:55.833
2023-06-01 14:22:55.865 2023-06-01 14:22:55.872 2023-06-01 14:22:55.873
2023-06-01 14:22:55.905 2023-06-01 14:22:55.921 2023-06-01 14:22:55.923
2023-06-01 14:22:55.945 2023-06-01 14:22:55.950 2023-06-01 14:22:55.951
2023-06-01 14:22:55.985 2023-06-01 14:22:55.989 2023-06-01 14:22:55.990
2023-06-01 14:22:56.025 2023-06-01 14:22:56.029 2023-06-01 14:22:56.030
2023-06-01 14:22:56.065 2023-06-01 14:22:56.069 2023-06-01 14:22:56.070
2023-06-01 14:22:56.105 2023-06-01 14:22:56.110 2023-06-01 14:22:56.111
2023-06-01 14:22:56.146 2023-06-01 14:22:56.150 2023-06-01 14:22:56.151
2023-06-01 14:22:56.186 2023-06-01 14:22:56.190 2023-06-01 14:22:56.191
2023-06-01 14:22:56.226 2023-06-01 14:22:56.232 2023-06-01 14:22:56.233
2023-06-01 14:22:56.266 2023-06-01 14:22:56.272 2023-06-01 14:22:56.274
2023-06-01 14:22:56.306 2023-06-01 14:22:56.323 2023-06-01 14:22:56.325
2023-06-01 14:22:56.346 2023-06-01 14:22:56.351 2023-06-01 14:22:56.352
2023-06-01 14:22:56.386 2023-06-01 14:22:56.392 2023-06-01 14:22:56.393
2023-06-01 14:22:56.427 2023-06-01 14:22:56.443 2023-06-01 14:22:56.445
2023-06-01 14:22:56.467 2023-06-01 14:22:56.471 2023-06-01 14:22:56.472
2023-06-01 14:22:56.507 2023-06-01 14:22:56.511 2023-06-01 14:22:56.511
2023-06-01 14:22:56.547 2023-06-01 14:22:56.566 2023-06-01 14:22:56.568
2023-06-01 14:22:56.587 2023-06-01 14:22:56.594 2023-06-01 14:22:56.595
2023-06-01 14:22:56.628 2023-06-01 14:22:56.648 2023-06-01 14:22:56.650
2023-06-01 14:22:56.668 2023-06-01 14:22:56.678 2023-06-01 14:22:56.683
2023-06-01 14:22:56.708 2023-06-01 14:22:56.719 2023-06-01 14:22:56.721
2023-06-01 14:22:56.748 2023-06-01 14:22:56.756 2023-06-01 14:22:56.758
2023-06-01 14:22:56.788 2023-06-01 14:22:56.797 2023-06-01 14:22:56.799
2023-06-01 14:22:56.828 2023-06-01 14:22:56.846 2023-06-01 14:22:56.848
2023-06-01 14:22:56.868 2023-06-01 14:22:56.876 2023-06-01 14:22:56.877
2023-06-01 14:22:56.909 2023-06-01 14:22:56.919 2023-06-01 14:22:56.920
2023-06-01 14:22:56.949 2023-06-01 14:22:56.956 2023-06-01 14:22:56.958

and below are the latest Redis messages:

1) "1685600557548-0"
   1) "metadata"
   2) "{\n "id" : "1950",\n "objects" : [\n "|| 2023-06-01T06:22:36.948Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
2) "1685600557588-0"
   1) "metadata"
   2) "{\n "id" : "1951",\n "objects" : [\n "|| 2023-06-01T06:22:36.988Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
3) "1685600557629-0"
   1) "metadata"
   2) "{\n "id" : "1952",\n "objects" : [\n "|| 2023-06-01T06:22:37.028Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
4) "1685600557666-0"
   1) "metadata"
   2) "{\n "id" : "1953",\n "objects" : [\n "|| 2023-06-01T06:22:37.068Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
5) "1685600557707-0"
   1) "metadata"
   2) "{\n "id" : "1954",\n "objects" : [\n "|| 2023-06-01T06:22:37.108Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
6) "1685600557749-0"
   1) "metadata"
   2) "{\n "id" : "1955",\n "objects" : [\n "|| 2023-06-01T06:22:37.148Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
7) "1685600557799-0"
   1) "metadata"
   2) "{\n "id" : "1956",\n "objects" : [\n "|| 2023-06-01T06:22:37.188Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
8) "1685600557828-0"
   1) "metadata"
   2) "{\n "id" : "1957",\n "objects" : [\n "|| 2023-06-01T06:22:37.228Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
9) "1685600557865-0"
   1) "metadata"
   2) "{\n "id" : "1958",\n "objects" : [\n "|| 2023-06-01T06:22:37.268Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
10) "1685600557923-0"
   1) "metadata"
   2) "{\n "id" : "1959",\n "objects" : [\n "|| 2023-06-01T06:22:37.308Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
11) "1685600557954-0"
   1) "metadata"
   2) "{\n "id" : "1960",\n "objects" : [\n "|| 2023-06-01T06:22:37.348Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
12) "1685600557993-0"
   1) "metadata"
   2) "{\n "id" : "1961",\n "objects" : [\n "|| 2023-06-01T06:22:37.388Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
13) "1685600558029-0"
   1) "metadata"
   2) "{\n "id" : "1962",\n "objects" : [\n "|| 2023-06-01T06:22:37.428Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
14) "1685600558069-0"
   1) "metadata"
   2) "{\n "id" : "1963",\n "objects" : [\n "|| 2023-06-01T06:22:37.468Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
15) "1685600558109-0"
   1) "metadata"
   2) "{\n "id" : "1964",\n "objects" : [\n "|| 2023-06-01T06:22:37.508Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
16) "1685600558151-0"
   1) "metadata"
   2) "{\n "id" : "1965",\n "objects" : [\n "|| 2023-06-01T06:22:37.548Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
17) "1685600558211-0"
   1) "metadata"
   2) "{\n "id" : "1966",\n "objects" : [\n "|| 2023-06-01T06:22:37.588Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
18) "1685600558261-0"
   1) "metadata"
   2) "{\n "id" : "1967",\n "objects" : [\n "|| 2023-06-01T06:22:37.628Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
19) "1685600558277-0"
   1) "metadata"
   2) "{\n "id" : "1968",\n "objects" : [\n "|| 2023-06-01T06:22:37.668Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
20) "1685600558310-0"
   1) "metadata"
   2) "{\n "id" : "1969",\n "objects" : [\n "|| 2023-06-01T06:22:37.708Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
21) "1685600558345-0"
   1) "metadata"
   2) "{\n "id" : "1970",\n "objects" : [\n "|| 2023-06-01T06:22:37.748Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
22) "1685600558386-0"
   1) "metadata"
   2) "{\n "id" : "1971",\n "objects" : [\n "|| 2023-06-01T06:22:37.788Z ||0 || 0.765942 || 3.97997|454.473|57.033|504.045|pos"\n ]\n}"
127.0.0.1:6379>

The last frame's publishing time on the host is 56.956 s (14:22:56.956 local time), while the last Redis message carries 06:22:37.788Z, i.e. 37.788 s.
Ignoring the whole-hour timezone offset between the two clocks, the timestamp error is nearly 19 seconds.

OK. Could you attach a simplified demo that can verify this, including how to build a Redis receiving environment? We can try it in our environment.

I am testing on Ubuntu 20: download Redis 6.0.8 and change 3 lines of redis.conf,
adding 192.168.55.100 next to 127.0.0.1 (bind 127.0.0.1 192.168.55.100) and setting
protected-mode no
then:
cd redis-xxx
src/redis-server redis.conf
and in another terminal:
cd redis-xxx
src/redis-cli