DeepStream Python: drop frames dynamically to keep inference real-time

Is it possible to set config parameters in the DeepStream Python reference apps to keep inference real-time (i.e. keep processing the most recent frames) by dynamically dropping frames?

Hi,

Yes. There are some configuration options you can try.

1. Use an FP16 model.
Example: dstest1_pgie_config.txt

[property]
...
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
...

2. Run inference periodically and let the tracker generate the intermediate output.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer-file-configuration-specifications

[property]
...
## number of consecutive batches to skip for inference (0 = infer on every batch)
interval=4
...
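Both suggestions come down to editing keys in the nvinfer config file that the Python reference app points at. As a minimal sketch (not part of the reference apps; the helper name and default values are my own), you could rewrite those keys from Python with the standard-library configparser before launching the pipeline:

```python
import configparser

def set_inference_options(config_path, network_mode=2, interval=4):
    """Hypothetical helper: rewrite nvinfer keys in a pgie config
    file such as dstest1_pgie_config.txt.

    network_mode: 0=FP32, 1=INT8, 2=FP16
    interval: number of consecutive batches to skip for inference
    Note: configparser discards comment lines when it rewrites the file.
    """
    config = configparser.ConfigParser()
    config.read(config_path)
    config["property"]["network-mode"] = str(network_mode)
    config["property"]["interval"] = str(interval)
    with open(config_path, "w") as f:
        config.write(f)
```

You would call this on your copy of the config file before creating the nvinfer element, then pass the file path to the element as usual via its config-file-path property.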

Thanks.