Confused About PTS values

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) :- dGPU
• DeepStream Version :- 7.1

I have a basic pipeline.
uridecodebin -> streammux -> pgie -> sgie1 -> sgie2 -> sink

Let's say that I have a 100-second-long video, 1080p at 30 FPS.

And I have two sets of hardware I can run this pipeline on with this video.

Hardware 1 :- super powerful. The entire 100-second video gets processed in 10 seconds.
Hardware 2 :- super slow. The entire 100-second video gets processed in 100 seconds.

I'm trying to figure out a way to get the actual frame time.
What I mean by that is hard for me to explain, so I will use an example.

Let's say that in the above video, there is a vehicle at the 10th second and another vehicle at the 90th second.

But since the fast hardware processes the video in just 10 seconds, I get the detections/events at around the 1st and 9th second of wall-clock time.

What I want is to obtain the actual in-video time of the detection. Is that possible?

What I have tried so far :-
[1] Obtained the PTS values of frames and the pipeline start time. It didn't give me any good result.

Any help would be appreciated.
If this is not the right forum to ask this question, please let me know.

PTS is the correct basis.

  1. When you set the sync property of the sink to false, the pipeline stops synchronizing against the clock and runs at the fastest possible speed, which shows up as different processing speeds on different devices.

Assuming your input video is 30 fps and you use nv3dsink/nveglglessink with their default sync=true, the pipeline processing speed will not exceed 30 fps.

  2. What container format is your stream in? mp4/mkv? If it is an elementary stream like .h264/.h265, without a container format, the PTS is likely to be inaccurate.

Currently the container is mp4.
What I want is to process the pipeline at the maximum FPS/speed possible and still get the IN_VIDEO_TIME calculation right.
By IN_VIDEO_TIME I mean: any frame that is n seconds from the start of the video should have a PTS of n seconds, something like that.

I am open to any container format.
I am not open to setting the sync property to true, because I want the maximum possible FPS.

It is like this: the 100th frame, for example, always has the same PTS no matter how many times it is inferred. How are you obtaining the PTS that causes the difference?

What I want is for the 89th frame to have the same PTS every time, irrespective of the speed at which the pipeline gets processed.

I have a few different sets of hardware, some super fast, some okayish.
And I have a DeepStream pipeline which I run on a video (1000 seconds @ 25 FPS).
In the video, there is a vehicle at, say, the 900th second (which means at frame number 22500).

Hardware 1 :- processes the entire video in 100 seconds.
Hardware 2 :- processes the entire video in 1000 seconds.

What I want is for the PTS of the 22500th frame to be the same on both hardware.

Do you understand? I am at a loss and not able to explain it correctly, I feel.

I get the buffer PTS by using the GStreamer API:

GST_BUFFER_PTS

Does it make any difference if I read the PTS before streammux or after streammux?

Yes, it should be so. I understand what you mean. Can you provide sample code and test stream to reproduce the problem?

The DeepStream elements will not modify the PTS, so the PTS should be consistent.

Alright, thanks.

I will provide code soon.

sample pipeline

Hey, I got it working. This sample works fine.

I have observed this:
The above pipeline processed a 1-hour video in 10 minutes. The buffer PTS of the last frame was equal to the video duration. This is what I want, and this is what the above app is doing.

The problem comes when I add this piece of logic to my production code (which is a different pipeline, but the same architecture, and also runs video files): the buffer PTS that gets printed is basically how long the pipeline has been running. The production pipeline processed a 1-hour video in 10 minutes, and the buffer PTS of the last frame was equal to 10 minutes.

I am trying to understand what could be going wrong.
The following is how the streammux is configured:

static gboolean
set_streammux_properties (NvDsStreammuxConfig * config, GstElement * element)
{
  gboolean ret = FALSE;
  const gchar *new_mux_str = g_getenv ("USE_NEW_NVSTREAMMUX");
  gboolean use_new_mux = !g_strcmp0 (new_mux_str, "yes");

  if (!use_new_mux) {
    g_object_set (G_OBJECT (element), "gpu-id", config->gpu_id, NULL);

    g_object_set (G_OBJECT (element), "nvbuf-memory-type",
        config->nvbuf_memory_type, NULL);

    g_object_set (G_OBJECT (element), "live-source", config->live_source, NULL);

    g_object_set (G_OBJECT (element),
        "batched-push-timeout", config->batched_push_timeout, NULL);

    g_object_set (G_OBJECT (element), "compute-hw", config->compute_hw, NULL);

    if (config->buffer_pool_size >= 4) {
      g_object_set (G_OBJECT (element),
          "buffer-pool-size", config->buffer_pool_size, NULL);
    }

    g_object_set (G_OBJECT (element), "enable-padding",
        config->enable_padding, NULL);

    if (config->pipeline_width && config->pipeline_height) {
      g_object_set (G_OBJECT (element), "width", config->pipeline_width, NULL);
      g_object_set (G_OBJECT (element), "height",
          config->pipeline_height, NULL);
    }
    if (!config->use_nvmultiurisrcbin) {
      g_object_set (G_OBJECT (element), "async-process",
          config->async_process, NULL);
    }

  }

  if (config->batch_size && !config->use_nvmultiurisrcbin) {
    g_object_set (G_OBJECT (element), "batch-size", config->batch_size, NULL);
  }

  g_object_set (G_OBJECT (element), "attach-sys-ts",
      config->attach_sys_ts_as_ntp, NULL);

  if (config->config_file_path) {
    g_object_set (G_OBJECT (element),
        "config-file-path", GET_FILE_PATH (config->config_file_path), NULL);
  }

  g_object_set (G_OBJECT (element), "frame-duration",
      config->frame_duration, NULL);

  g_object_set (G_OBJECT (element), "frame-num-reset-on-stream-reset",
      config->frame_num_reset_on_stream_reset, NULL);

  g_object_set (G_OBJECT (element), "sync-inputs", config->sync_inputs, NULL);

  g_object_set (G_OBJECT (element), "max-latency", config->max_latency, NULL);
  g_object_set (G_OBJECT (element), "frame-num-reset-on-eos",
      config->frame_num_reset_on_eos, NULL);
  g_object_set (G_OBJECT (element), "drop-pipeline-eos", config->no_pipeline_eos,
      NULL);

  if (config->extract_sei_type5_data) {
    g_object_set (G_OBJECT (element), "extract-sei-type5-data", config->extract_sei_type5_data,
        NULL);
  }
  if (config->num_surface_per_frame > 1) {
      g_object_set (G_OBJECT (element), "num-surfaces-per-frame",
          config->num_surface_per_frame, NULL);
  }

  ret = TRUE;

  return ret;
}

I am not sure where else to look.


This is my pipeline.

Are you using nvstreammux to process multiple videos at the same time?

If so, you need to use nvstreamdemux; otherwise the PTS of the GstBuffer you get from nvstreammux is the batch PTS, not the frame PTS.

nvstreammux batches multiple video frames into a single GstBuffer.

I am using nvstreammux, but the number of streams is just one, the same as in the test pipeline (the code of which is attached above).
And that works fine; I get the proper PTS.

You there?

I cannot access your sample code due to permission issues.

Since the test pipeline works fine, it is not an issue of the DeepStream SDK.

You should check the code in your production environment. Try adding a probe function on each pad to check whether the PTS is modified. Or try reading frame_meta->buf_pts; this value comes from the PTS of the video frame.

Okay, let me check. Also, the sample app is just deepstream-test3.

Thanks. It worked.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.