Insert probes for latency measurements of the pipeline components

Please provide complete information as applicable to your setup.
• Tesla T4
• DeepStream 6.1
• TensorRT 8.2.5.1
• NVIDIA GPU Driver Version 510.47.03
• Insert probes to measure delay

I have been testing DeepStream 6.1 using source1_usb_dec_infer_resnet_int8.txt. As far as I know, the pipeline creates 5 components:

  1. src_bin_muxer;
  2. primary_gie;
  3. tiled_display_tiler;
  4. osd_conv;
  5. nvosd0.

I would like to put probes in specific positions to calculate the latency. I know that the components below are not open source (correct me if I am wrong):

  1. src_bin_muxer;
  2. primary_gie;
  3. tiled_display_tiler;

So I was wondering if anyone could advise whether there is any way to calculate the latency of the components above by putting probes in other places. Perhaps there are functions called between pipeline components that I could hook into.

In the Q&A for DS 6.1.1 I read this:
"If the open source component cannot be modified to measure latency using APIs mentioned in NVIDIA DeepStream SDK API Reference: Latency Measurement API , then following approach can be used.

You can insert a probe on sink pad of the decoder, measure the time at which input buffer arrives. Insert another probe on sink pad of the sink component and measure the time at which output buffer arrives corresponding to the input buffer. Time difference between these two will give you the latency of the buffer."
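The quoted approach boils down to timestamp bookkeeping keyed by frame number. Below is a minimal standalone sketch of that idea; everything is illustrative: the two functions stand in for callbacks that would really be registered with gst_pad_add_probe() on the decoder's sink pad and the sink element's sink pad, and the sleep simulates the pipeline processing a frame.

```python
import time

in_timestamps = {}   # frame_num -> arrival time at decoder sink pad
latencies_ms = {}    # frame_num -> measured end-to-end latency (ms)

def decoder_sink_probe(frame_num):
    # First probe point: record when the input buffer arrives.
    in_timestamps[frame_num] = time.monotonic()

def sink_element_probe(frame_num):
    # Second probe point: match the corresponding input buffer by
    # frame number and compute the time difference.
    t_in = in_timestamps.pop(frame_num, None)
    if t_in is not None:
        latencies_ms[frame_num] = (time.monotonic() - t_in) * 1000.0

# Simulated buffer flow: frame enters, pipeline "processes" it, frame leaves.
for n in range(3):
    decoder_sink_probe(n)
    time.sleep(0.005)          # stand-in for pipeline processing time
    sink_element_probe(n)

print(latencies_ms)
```

In a real pipeline the frame number would come from the buffer metadata (e.g. NvDsFrameMeta) rather than a loop counter, so buffers can be matched even if they complete out of order.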

Could you help me locate those sink pads?
I would appreciate any help.

PS. I know that I can get latency info using the NvDs Latency Measurement API. But for this project I have to insert my own probes.

Please refer to the topic "The deepstream-test3 demo using rtsp webcam delayed".

Dear fanzh, thank you for your reply. Are you talking about using:

  1. export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
  2. export NVDS_ENABLE_LATENCY_MEASUREMENT=1

If so, then I am aware of it. But I need to get those timestamps myself, using my own custom probes.

Yes, the DeepStream SDK provides a method to measure latency.
You can also use your own method.

Thank you for your reply.
Yes, I understand; I am asking you for advice or suggestions.

Is it possible for me to calculate the latency using my own probes for the components below?

  1. src_bin_muxer;
  2. primary_gie;
  3. tiled_display_tiler;

If yes, where would you suggest putting the probes for them?

Thank you!

  1. Yes, source_id and frame_num in NvDsFrameMeta are unique, so you can save the start and end time at every element, then get the delay time of each element and of the pipeline.
  2. You can add a sink probe and a src probe on each element. Sample output:
    BATCH-NUM = 42
    Comp name = nvv4l2decoder0 in_system_timestamp = 1666856167842.747070 out_system_timestamp = 1666856167846.197998 component latency= 3.450928
    Comp name = nvstreammux-stream-muxer source_id = 0 pad_index = 0 frame_num = 42 in_system_timestamp = 1666856167846.235107 out_system_timestamp = 1666856167846.388916 component_latency = 0.153809
    Comp name = primary-nvinference-engine in_system_timestamp = 1666856167846.506104 out_system_timestamp = 1666856167853.613037 component latency= 7.106934
    Comp name = nvtiler in_system_timestamp = 1666856167853.754883 out_system_timestamp = 1666856167857.861084 component latency= 4.106201
    Comp name = nvvideo-converter in_system_timestamp = 1666856167858.092041 out_system_timestamp = 1666856167860.152100 component latency= 2.060059
    Comp name = nv-onscreendisplay in_system_timestamp = 1666856167860.270996 out_system_timestamp = 1666856167860.284912 component latency= 0.013916
    Source id = 0 Frame_num = 42 Frame latency = 17.585938 (ms)
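Point 1 above can be sketched as plain dictionary bookkeeping keyed by (source_id, frame_num). This is only an illustration: the two functions stand in for real sink-pad and src-pad probes on each element, the component names are taken from this thread, and the sleep simulates the element's processing time.

```python
import time
from collections import defaultdict

# component name -> {(source_id, frame_num): time seen at sink pad}
in_times = defaultdict(dict)
# component name -> {(source_id, frame_num): measured latency (ms)}
component_latency_ms = defaultdict(dict)

def sink_pad_probe(comp, source_id, frame_num):
    # Buffer enters the component: record the start time.
    in_times[comp][(source_id, frame_num)] = time.monotonic()

def src_pad_probe(comp, source_id, frame_num):
    # Buffer leaves the component: match it by key and compute the delta.
    key = (source_id, frame_num)
    t_in = in_times[comp].pop(key, None)
    if t_in is not None:
        component_latency_ms[comp][key] = (time.monotonic() - t_in) * 1000.0

# Simulated flow of one frame through the three closed-source components.
for comp in ("src_bin_muxer", "primary_gie", "tiled_display_tiler"):
    sink_pad_probe(comp, 0, 42)
    time.sleep(0.002)          # stand-in for the element's processing time
    src_pad_probe(comp, 0, 42)

for comp, lat in component_latency_ms.items():
    print(comp, lat)
```

Summing the per-component deltas for one (source_id, frame_num) key approximates that frame's pipeline latency, which is what the "Frame latency" line in the sample output reports.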

Got it! Thank you for your reply!

But could you be a bit more specific: in which file exactly can I put the probes?

Thank you for your replies. I am stuck with my project, I would appreciate your help.

there are two methods:

  1. After creating a bin, find the specific element inside the bin, then add a probe; for example, add a probe on src_bin_muxer after create_multi_source_bin.
  2. The elements are created under deepstream\deepstream\sources\apps\apps-common; this code is open source, so you can add a probe on an element there, for example on bin->streammux in create_multi_source_bin.

Hello again. Thank you for your constant updates! Highly appreciate it!

I did as you said. I tried to check if it would work:

I know it is not a probe. I first wanted to check whether this is the right place before attaching the probe. But this function (create_multi_source_bin) only runs once, at startup, to create the pipeline elements.

So the code there is not executed continuously, meaning I can’t measure the component’s latency while it is processing frames.

Would it be possible to get the timestamps for the running components of the pipeline using custom probes?

So yeah, that’s what I am trying to achieve. I know I can get the timestamps using the NVIDIA Latency API. But I am in a position where I need to calculate them using my own probes.

I am not clear about your question. You can add a probe function on bin->streammux; that function will then be entered once for each buffer streammux processes.

Oh, got it! I will test it and let you know. Thank you very much!

Sorry for the late reply, Is this still an issue to support? Thanks

Hello. Can you keep the thread open for 24 hours? I have not had time to test the probes on bin->streammux yet. I will do it today and let you know if it worked.

Also, could you explain a bit more what exactly you mean by putting the probe on “bin → streammux”? Thanks a lot!

I mean you can add a src-pad probe function on the streammux element; please refer to osd_sink_pad_buffer_probe in deepstream-test1.
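For illustration, here is a minimal standalone sketch of a probe callback in the (pad, info, user_data) shape that gst_pad_add_probe() expects, loosely modeled on osd_sink_pad_buffer_probe. The Gst and NvDsFrameMeta types are mocked with plain classes so the bookkeeping runs by itself; in the real application the callback would walk the NvDsBatchMeta attached to the buffer, and OK would be GST_PAD_PROBE_OK.

```python
import time

OK = 0  # stand-in for GST_PAD_PROBE_OK

# (source_id, frame_num) -> time the frame passed streammux's src pad
timestamps = {}

class FakeFrameMeta:
    """Stand-in for NvDsFrameMeta: just the two fields needed as a key."""
    def __init__(self, source_id, frame_num):
        self.source_id = source_id
        self.frame_num = frame_num

class FakeInfo:
    """Stand-in for GstPadProbeInfo carrying a batched buffer's frames."""
    def __init__(self, frame_metas):
        self.frame_metas = frame_metas

def streammux_src_pad_probe(pad, info, user_data):
    # Entered once per buffer leaving streammux; a batched buffer can
    # carry one frame per source, so record a timestamp for each.
    now = time.monotonic()
    for fm in info.frame_metas:
        timestamps[(fm.source_id, fm.frame_num)] = now
    return OK

# Simulated invocation: one batched buffer with frames from two sources.
streammux_src_pad_probe(None, FakeInfo([FakeFrameMeta(0, 42),
                                        FakeFrameMeta(1, 42)]), None)
print(sorted(timestamps))
```

A matching probe on a downstream element's pad would look up the same (source_id, frame_num) key and subtract the stored timestamp to get the latency between the two probe points.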

Thank you for the reply! I will test it and let you know! Much appreciated!

Sorry for the late reply, Is this still an issue to support? Thanks

Thank you for your response. I have been struggling with implementing the function for streammux. I am trying to replicate something similar to osd_sink_pad_buffer_probe from deepstream-test1. There is some work left.

I will write back soon. Thank you!

Thanks for your update. Is this still an issue to support? Thanks

There is no update from you for a period, assuming this is not an issue any more.
Hence we are closing this topic. If need further support, please open a new one.
Thanks