Measuring DeepStream latency does not work

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): jetson nano
• DeepStream Version:
• JetPack Version (valid for Jetson only): 4.6.3
• TensorRT Version: 8.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs): questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Hi,

I want to figure out which component contributes the greatest latency in deepstream-app, so I enabled the environment variables:
for frame latency
NVDS_ENABLE_LATENCY_MEASUREMENT
for component latency
NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT

But it doesn't work, so I tried to find the problem.

In the latency_measurement_buf_prob() function, the nvds_enable_latency_measurement variable is still 0, even though I have exported the environment variable.

Could you provide any solution? Thanks.
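
For reference, the check inside deepstream-app's latency_measurement_buf_prob() looks roughly like the sketch below. It is simplified from the sample sources and the exact code differs slightly between DeepStream versions; nvds_measure_buffer_latency() and NvDsFrameLatencyInfo come from the SDK's nvds_latency_meta.h.

#include <gst/gst.h>
#include "nvds_latency_meta.h"

static GstPadProbeReturn
latency_measurement_buf_prob (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
{
    /* nvds_enable_latency_measurement reads the NVDS_ENABLE_LATENCY_MEASUREMENT
     * environment variable; if the process does not see the variable, this whole
     * branch is skipped and no latency is printed. */
    if (nvds_enable_latency_measurement) {
        GstBuffer *buf = (GstBuffer *) info->data;
        NvDsFrameLatencyInfo latency_info[1];   /* size should match the streammux batch-size */

        guint num_sources = nvds_measure_buffer_latency (buf, latency_info);
        for (guint i = 0; i < num_sources; i++) {
            g_print ("Source id = %u Frame_num = %u Frame latency = %lf (ms)\n",
                     latency_info[i].source_id,
                     latency_info[i].frame_num,
                     latency_info[i].latency);
        }
    }
    return GST_PAD_PROBE_OK;
}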

config

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri = rtsp.....
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=2
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3_tiny.txt

[tracker]
enable=1
# For NvDCF and DeepSORT tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_nvmultiobjecttracker.so
# ll-config-file required to set different tracker types
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_IOU.yml
ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_NvDCF_accuracy.yml
# ll-config-file=../../samples/configs/deepstream-app/config_tracker_DeepSORT.yml
gpu-id=0
enable-batch-process=1
enable-past-frame=1
display-tracking-id=1

[tests]
file-loop=0

If you developed your own DeepStream C++ code, you could also refer to DeepStream SDK FAQ - #12 by bcao to measure the latency of the pipeline components.
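
The FAQ patch essentially attaches a buffer probe like the one sketched earlier to a pad late in the pipeline. A minimal sketch of that registration with plain GStreamer calls is below; the element name "nvosd0" is only a placeholder, not taken from the FAQ.

/* Call latency_measurement_buf_prob() on every buffer that reaches the OSD sink pad. */
GstElement *osd = gst_bin_get_by_name (GST_BIN (pipeline), "nvosd0");
GstPad *osd_sink_pad = gst_element_get_static_pad (osd, "sink");

gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
                   latency_measurement_buf_prob, NULL, NULL);

gst_object_unref (osd_sink_pad);
gst_object_unref (osd);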

So nvds_enable_latency_measurement=0 is correct.

I am using the sample deepstream-app.

According to the documentation, nvds_get_enable_latency_measurement returns TRUE if NVDS_ENABLE_LATENCY_MEASUREMENT is exported, but it returns FALSE on my device.
https://docs.nvidia.com/metropolis/deepstream/6.0.1/sdk-api/group__ee__nvlatency__group.html

Can you share the output of "echo $NVDS_ENABLE_LATENCY_MEASUREMENT"?

The output of “echo $NVDS_ENABLE_LATENCY_MEASUREMENT”

Sorry, this function is not in the SDK. However, I can't reproduce this issue on Xavier + DS 6.2. I added a log in osd_sink_pad_buffer_probe of test1, and nvds_enable_latency_measurement is true after exporting NVDS_ENABLE_LATENCY_MEASUREMENT. Here is the test code:

if (nvds_enable_latency_measurement) {
    printf("xx1\n");
} else {
    printf("xx2\n");
}

How did you test?

I also added the log in osd_sink_pad_buffer_probe of test1, but nvds_enable_latency_measurement is false.

if (nvds_enable_latency_measurement) {
    printf("xx1\n");
} else {
    printf("xx2\n");
}

After the test, I checked the environment variables again.

So maybe this problem only exists on the Nano? Is that possible? Or did I miss any steps?
Thanks.

Did you try running with root permission? Can you use a small C program to read the environment variable NVDS_ENABLE_LATENCY_MEASUREMENT? Did you try reinstalling?
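
For example, a minimal standalone check could look like the sketch below (the file name check_env.c is just an example, compiled with e.g. gcc check_env.c -o check_env):

#include <stdio.h>
#include <stdlib.h>

int main (void)
{
    /* getenv() returns NULL when the variable is not set in this process's environment. */
    const char *frame = getenv ("NVDS_ENABLE_LATENCY_MEASUREMENT");
    const char *comp  = getenv ("NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT");

    printf ("NVDS_ENABLE_LATENCY_MEASUREMENT = %s\n", frame ? frame : "(not set)");
    printf ("NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT = %s\n", comp ? comp : "(not set)");
    return 0;
}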

If I export only NVDS_ENABLE_LATENCY_MEASUREMENT=1, the frame latency can be seen in the log. But if I export NVDS_ENABLE_LATENCY_MEASUREMENT=1 and NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1 at the same time, it fails.

Could you share why the frame latency can be seen this time?

Can you open a new terminal window and test again?

At the beginning, I exported both environment variables below, but it failed.

export NVDS_ENABLE_LATENCY_MEASUREMENT=1
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1

Now I export only the variable below, and the frame latency can be seen in the log.

export NVDS_ENABLE_LATENCY_MEASUREMENT=1

The component latency still can’t be seen.

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Please refer to delay. If you are using other DeepStream sample apps such as deepstream-test3, you need to apply the patch.

We can't reproduce this issue. Please refer to this topic: delayed. That user was also able to get the component latency.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.