DeepStream reading a video file: frame latency is very high

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 7.2
• NVIDIA GPU Driver Version (valid for GPU only) 470.86
• Issue Type questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

My GPU is an RTX 3070 Ti.
I used the DeepStream sample deepstream-app to process mp4 files, and I found something confusing: some mp4 files are analyzed fast, up to 400 fps, while other mp4 files are analyzed slowly, at only about 100 fps.
These mp4 files have the same duration, codec and resolution.

So I don't understand what causes some videos to reach only about 100 fps.

For the videos that reach 400 fps, with "export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1" set, the latency log looks like this:

************BATCH-NUM = 602**************
Comp name = nvv4l2decoder0 in_system_timestamp = 1679473795473.941895 out_system_timestamp = 1679473795480.660889               component latency= 6.718994
Comp name = src_bin_muxer source_id = 0 pad_index = 0 frame_num = 602               in_system_timestamp = 1679473795488.502930 out_system_timestamp = 1679473795490.207031               component_latency = 1.704102
Comp name = primary_gie in_system_timestamp = 1679473795490.259033 out_system_timestamp = 1679473795495.218018               component latency= 4.958984
Comp name = tracking_tracker in_system_timestamp = 1679473795495.232910 out_system_timestamp = 1679473795496.385986               component latency= 1.153076
Comp name = secondary_gie_0 in_system_timestamp = 1679473795496.431885 out_system_timestamp = 1679473795496.524902               component latency= 0.093018
Comp name = demuxer in_system_timestamp = 1679473795497.573975 out_system_timestamp = 1679473795497.629883               component latency= 0.055908
Source id = 0 Frame_num = 602 Frame latency = 23.810059 (ms)

************BATCH-NUM = 603**************
Comp name = nvv4l2decoder0 in_system_timestamp = 1679473795476.745117 out_system_timestamp = 1679473795482.543945               component latency= 5.798828
Comp name = src_bin_muxer source_id = 0 pad_index = 0 frame_num = 603               in_system_timestamp = 1679473795490.138916 out_system_timestamp = 1679473795492.517090               component_latency = 2.378174
Comp name = primary_gie in_system_timestamp = 1679473795492.555908 out_system_timestamp = 1679473795497.225098               component latency= 4.669189
Comp name = tracking_tracker in_system_timestamp = 1679473795497.239014 out_system_timestamp = 1679473795498.614014               component latency= 1.375000
Comp name = secondary_gie_0 in_system_timestamp = 1679473795498.665039 out_system_timestamp = 1679473795498.739990               component latency= 0.074951
Comp name = demuxer in_system_timestamp = 1679473795499.761963 out_system_timestamp = 1679473795499.818115               component latency= 0.056152
Source id = 0 Frame_num = 603 Frame latency = 23.236816 (ms)

and nvidia-smi dmon shows:

# gpu   pwr gtemp mtemp    sm   mem   enc   dec  mclk  pclk
# Idx     W     C     C     %     %     %     %   MHz   MHz
    0   250    61     -    99    49     0    28  9251  1965
    0   248    61     -    93    47     0    26  9251  1965
    0   246    62     -    98    43     0    28  9251  1965
    0   245    62     -    99    47     0    28  9251  1965
    0   249    63     -    99    50     0    27  9251  1965
    0   250    62     -    99    49     0    27  9251  1965
    0   244    62     -    99    43     0    28  9251  1965
    0   251    63     -   100    51     0    27  9251  1965
    0   246    63     -    99    44     0    29  9251  1950
    0   259    64     -   100    54     0    26  9251  1950
    0   263    63     -    99    43     0    28  9251  1950
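
For reference, this is roughly how the measurements above can be captured (a minimal sketch; source_config.txt is just a placeholder name for the config shown further below, and NVDS_ENABLE_LATENCY_MEASUREMENT is assumed to be needed for the per-frame latency line in addition to the component-level variable):

export NVDS_ENABLE_LATENCY_MEASUREMENT=1            # per-frame "Frame latency" line (assumed)
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1  # per-component latency lines
deepstream-app -c source_config.txt                 # source_config.txt = placeholder for the config below

# in a second terminal, sample GPU utilization once per second
nvidia-smi dmon -d 1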

For the videos that reach only about 100 fps, the latency log looks like this:

************BATCH-NUM = 545**************
Comp name = src_bin_muxer source_id = 0 pad_index = 0 frame_num = 0               in_system_timestamp = 1679474703927.888916 out_system_timestamp = 1679474703930.166992               component_latency = 2.278076
Comp name = primary_gie_conv in_system_timestamp = 1679474703945.535889 out_system_timestamp = 1679474703947.906006               component latency= 2.370117
Comp name = primary_gie in_system_timestamp = 1679474703947.943115 out_system_timestamp = 1679474703960.968994               component latency= 13.025879
Comp name = tracking_tracker in_system_timestamp = 1679474703960.978027 out_system_timestamp = 1679474703961.758057               component latency= 0.780029
Comp name = secondary_gie_0 in_system_timestamp = 1679474703961.824951 out_system_timestamp = 1679474703961.929932               component latency= 0.104980
Comp name = demuxer in_system_timestamp = 1679474703962.837891 out_system_timestamp = 1679474703962.864014               component latency= 0.026123
Source id = 0 Frame_num = 0 Frame latency = 1679474703962.969971 (ms)

************BATCH-NUM = 546**************
Comp name = src_bin_muxer source_id = 0 pad_index = 0 frame_num = 0               in_system_timestamp = 1679474703930.614014 out_system_timestamp = 1679474703936.020996               component_latency = 5.406982
Comp name = primary_gie_conv in_system_timestamp = 1679474703951.680908 out_system_timestamp = 1679474703953.752930               component latency= 2.072021
Comp name = primary_gie in_system_timestamp = 1679474703953.804932 out_system_timestamp = 1679474703967.907959               component latency= 14.103027
Quitting
Comp name = tracking_tracker in_system_timestamp = 1679474703967.916016 out_system_timestamp = 1679474703968.798096               component latency= 0.882080
Comp name = secondary_gie_0 in_system_timestamp = 1679474703968.900879 out_system_timestamp = 1679474703969.014893               component latency= 0.114014
Comp name = demuxer in_system_timestamp = 1679474703969.896973 out_system_timestamp = 1679474703969.958008               component latency= 0.061035
Source id = 0 Frame_num = 0 Frame latency = 1679474703970.139893 (ms)

and nvidia-smi dmon shows:

# gpu   pwr gtemp mtemp    sm   mem   enc   dec  mclk  pclk
# Idx     W     C     C     %     %     %     %   MHz   MHz
    0   150    49     -    99    17     0     0  9251  1950
    0   156    50     -    99    17     0     0  9251  1950
    0   155    51     -    98    17     0     0  9251  1950
    0   156    51     -    99    18     0     0  9251  1950
    0   156    52     -    98    17     0     0  9251  1950
    0   157    52     -    99    18     0     0  9251  1950
    0   157    53     -    99    18     0     0  9251  1950
    0   158    53     -    99    17     0     0  9251  1950
    0   162    53     -    99    17     0     0  9251  1950
    0   158    54     -    99    18     0     0  9251  1980
    0   161    54     -    99    18     0     0  9251  1980
    0   162    54     -    99    17     0     0  9251  1980
    0   162    55     -    99    18     0     0  9251  1980

My config file looks like this:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=2
columns=2
width=1280
height=720
gpu-id=1
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file:///home/asdf/test.mp4
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=0
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
batch-size=1
interval=0
gie-unique-id=1
process-mode=1
nvbuf-memory-type=1
config-file=primary.txt

[tracker]
enable=1
gpu-id=0
# For the case of NvDCF tracker, tracker-width and tracker-height must be a multiple of 32, respectively
tracker-width=384
tracker-height=192
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so

#ll-config-file required for DCF/IOU only
ll-config-file=tracker_config.yml

[secondary-gie0]
enable=1
gpu-id=0
#(0): nvinfer; (1): nvinferserver
plugin-type=0
batch-size=16
gie-unique-id=2
process-mode=2
operate-on-gie-id=1
operate-on-class-ids=0
config-file=secondary.txt


[tests]
file-loop=0

There is no update from you for a period, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Have you monitored the CPU load and GPU load while the app is running?

You can monitor the GPU load with "nvidia-smi dmon".
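
For example, one simple way to capture both at the same time during a run (a sketch; mpstat comes from the sysstat package, and the config path and log file names are just placeholders):

nvidia-smi dmon -d 1 -o T > gpu_load.log &    # GPU utilization with timestamps, 1 s interval
mpstat 1 > cpu_load.log &                     # overall CPU utilization every second
deepstream-app -c source_config.txt           # run the app under test
kill %1 %2                                    # stop the two background monitors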
