Please provide complete information as applicable to your setup.
• Tesla T4
• DeepStream 6.1
• TensorRT 8.2.5.1
• NVIDIA GPU Driver Version 510.47.03
• Insert probes to measure delay
I have been testing DeepStream 6.1 using source1_usb_dec_infer_resnet_int8.txt. As far as I know, it creates five components:
src_bin_muxer;
primary_gie;
tiled_display_tiler;
osd_conv;
nvosd0.
I would like to put probes at specific positions to calculate the latency. I know that the components below are not open source (correct me if I am wrong):
src_bin_muxer;
primary_gie;
tiled_display_tiler;
So I was wondering if anyone could advise whether there is a way to calculate the latency of the components above by putting probes elsewhere. Perhaps there are functions called between pipeline components that I could hook into.
You can insert a probe on the sink pad of the decoder and measure the time at which the input buffer arrives. Insert another probe on the sink pad of the sink component and measure the time at which the corresponding output buffer arrives. The time difference between these two will give you the latency of the buffer.
Could you help me locate those sink pads?
I would appreciate any help.
PS. I know that I can get latency info using the NvDs Latency Measurement API, but for the sake of the project I have to insert my own probes.
Yes, source_id and frame_num of NvDsFrameMeta together are unique. You can save the start and end time at every element, then get the delay of each element and of the whole pipeline.
You can add a sink probe and a src probe on the following elements; the output then looks like this:

```
BATCH-NUM = 42
Comp name = nvv4l2decoder0 in_system_timestamp = 1666856167842.747070 out_system_timestamp = 1666856167846.197998 component latency = 3.450928
Comp name = nvstreammux-stream-muxer source_id = 0 pad_index = 0 frame_num = 42 in_system_timestamp = 1666856167846.235107 out_system_timestamp = 1666856167846.388916 component_latency = 0.153809
Comp name = primary-nvinference-engine in_system_timestamp = 1666856167846.506104 out_system_timestamp = 1666856167853.613037 component latency = 7.106934
Comp name = nvtiler in_system_timestamp = 1666856167853.754883 out_system_timestamp = 1666856167857.861084 component latency = 4.106201
Comp name = nvvideo-converter in_system_timestamp = 1666856167858.092041 out_system_timestamp = 1666856167860.152100 component latency = 2.060059
Comp name = nv-onscreendisplay in_system_timestamp = 1666856167860.270996 out_system_timestamp = 1666856167860.284912 component latency = 0.013916
Source id = 0 Frame_num = 42 Frame latency = 17.585938 (ms)
```
After creating the bin, find the specific element inside the bin, then add the probe; for example, add a probe on src_bin_muxer after create_multi_source_bin.
The element-creation code lives under deepstream\deepstream\sources\apps\apps-common. This code is open source, so you can add a probe on an element there, for example on bin->streammux in create_multi_source_bin.
I know it is not a probe. I first wanted to check whether this is the right place and then attach the probe. But create_multi_source_bin only runs once, at startup, to create the pipeline elements.
It is not called continuously, meaning I cannot check the component's latency while it is analyzing frames.
Would it be possible to get timestamps from the running components of the pipeline using custom probes?
That is what I am trying to achieve. I know I can get the timestamps using the NVIDIA Latency API, but I am in a position where I need to calculate them using my own probes.
I am not clear about your question. You can add a probe function on bin->streammux; the probe function is then entered once for each buffer that streammux processes, not only at pipeline creation.
Hello. Can you keep the thread open for 24 hours? I have not had time to test the probes on bin->streammux yet. I will do it today and let you know if it worked.
Thank you for your response. I have been working on implementing the probe function on streammux; I am trying to replicate something similar to osd_sink_pad_buffer_probe from deepstream-test1. There is some work left.
Thanks for your update. Is this still an issue that needs support? Thanks
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks