Cannot get latency measurement result

• Hardware Platform : T4
• DeepStream Version: 5.0 without docker
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version: 440.64

Hi,
I want to measure the per-component and per-buffer pipeline latency, as described in the Latency measurement issue thread.

However, when I export NVDS_ENABLE_LATENCY_MEASUREMENT=1 before running deepstream-app with an RTSP source and an RTSP sink, I get the output below. The pipeline itself runs correctly.

BATCH-NUM = 0**
Batch meta not found for buffer 0x7f8ab8009350
BATCH-NUM = 1**
Batch meta not found for buffer 0x7f8ab8009df0
BATCH-NUM = 2**
Batch meta not found for buffer 0x7f8ab8009bd0
BATCH-NUM = 3**
Batch meta not found for buffer 0x7f8ab800b1b0
BATCH-NUM = 4**
Batch meta not found for buffer 0x7f8acc05f050
BATCH-NUM = 5**
Batch meta not found for buffer 0x7f8ab800b1b0
BATCH-NUM = 6**
Batch meta not found for buffer 0x55642a83ce30
BATCH-NUM = 7**
Batch meta not found for buffer 0x7f8acc01a3d0
BATCH-NUM = 8**
Batch meta not found for buffer 0x7f8ab800b4e0
BATCH-NUM = 9**
Batch meta not found for buffer 0x7f8acc042260
BATCH-NUM = 10**
Batch meta not found for buffer 0x7f8ab8009ce0

Meanwhile, when I also set NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT to 1, I get no output at all.

Please tell me what I should do. Thanks.
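For completeness, this is exactly how I enable the measurement before launching, as a minimal sketch. The deepstream-app invocation is commented out here since it requires the SDK, and the config file name is only a placeholder for my actual config:

```shell
# Enable per-buffer (frame) latency measurement
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
# Additionally enable the per-plugin component latency breakdown
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1

# Then launch with the config pasted further below
# (placeholder file name, requires the DeepStream SDK):
# deepstream-app -c my_rtsp_config.txt

echo "latency=$NVDS_ENABLE_LATENCY_MEASUREMENT component=$NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT"
```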

Your application should be exiting from here:
sources/apps/sample_apps/deepstream-app/deepstream_app.c::process_buffer
NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
if (!batch_meta) {
  NVGSTDS_WARN_MSG_V ("Batch meta not found for buffer %p", buf);
  return;
}
Not sure why your buffer has no metadata attached.
Can you run the sample in sources/apps/sample_apps/deepstream-infer-tensor-meta-test/?
It is meant as a simple demonstration of how to access the infer plugin's output tensor
data for DeepStream SDK elements in the pipeline.
Can you paste your config used?

Hi amycao, thanks for your reply.

The RTSP output is normal even though the message says "Batch meta not found for buffer", as I can see the bounding boxes in the output frames.

Below is my config.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
#uri=rtsp://admin:jiaxun123@192.168.170.65:554/Streaming/Channels/1?transportmode=unicast
uri=rtsp://10.9.4.133/10001
num-sources=1
gpu-id=0
cudadec-memtype=0
num-extra-surfaces=5

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=4
codec=1
bitrate=2000000
rtsp-port=9554
udp-port=9400
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
qos=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
model-engine-file=0_model_b1_gpu0_int8.engine
labelfile-path=0_labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=0_config_infer_primary_yoloV3.txt

[tracker]
enable=0
tracker-width=640
tracker-height=384
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so

[tests]
file-loop=0

Which sample did you use to run the component latency measurements?

I see, it's deepstream-app, right? Did you make any changes?

Yes, the sample I used is deepstream-app and I changed nothing.

You need to enable the tiler in the config file, as it is necessary for latency measurement.

Hi amycao, enabling tiled-display does not solve the problem.

I also tested on Jetson Xavier with DeepStream 5.0 and got the same result.

Can you try with sink type 1 or 2? I just tried these two types and they do not have the issue.

Yes, I get the measurement results when I change the sink from type 4 to type 1.
But I wonder why the result is abnormal when I use the RTSP sink.
Thanks

@zongxp Unfortunately, the current deepstream-app latency measurement is based on the NvDsBatchMeta (https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_metadata.2.1.html#) attached by the NV DeepStream plugins. Only when the sink plugin is directly connected to an NV DeepStream plugin such as nvstreamdemux, nvv4l2decoder,… can the NvDsBatchMeta be queried on its sink pad. In the RTSP sink case, the sink plugin is directly connected to the rtph264pay plugin, which cannot forward the customized NvDsBatchMeta, so the query for NvDsBatchMeta fails.
To summarize, the latency measurement is currently only supported with fakesink and eglglessink.
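So, as confirmed above with sink type 1, a workaround for gathering latency numbers is to temporarily switch the sink group in the config from RTSP streaming to FakeSink. A minimal sketch of the change, keeping only the fields from the [sink0] group in the config pasted earlier that still apply (the other fields are RTSP/encoder-specific and can be dropped):

```
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=1
sync=0
source-id=0
gpu-id=0
```

Once the latency figures are collected, the sink can be switched back to type=4 for normal RTSP output.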