Doubt regarding deepstream-app interval and batch-size

• Hardware Platform (Jetson / GPU)
Jetson
• DeepStream Version
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
• JetPack Version (valid for Jetson only)
JetPack 4.6, L4T 32.6.1
• TensorRT Version
TensorRT 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
N/A (Jetson, no discrete GPU)
• Issue Type (questions, new requirements, bugs)
Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Run the default deepstream-nvdsanalytics-test using the provided configuration file, and capture the output in a log file.

• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
Requirements to run the default deepstream-nvdsanalytics-test app.

My question is this: regardless of the interval I set under the [primary-gie] section of the application's main configuration file, the frames output from inference do not seem to vary.

E.g. when I run the deepstream-nvdsanalytics-test app with an interval of 0 and a batch-size of 1, I can see that no frames are skipped in the output; so far so good. But when I then choose an interval of 20 with a batch-size of 1, the output remains the same: I still see frames counted from frame number 0 to N, exactly as if inference were not skipping any frames. GPU usage, however, does appear to respect the interval configuration.

For reference I am using the definitions from the deepstream-app reference application: interval is the "Number of consecutive batches to skip for inference", and batch-size is "The number of frames (P.GIE) / objects (S.GIE) to be inferred together in a batch." From these definitions I was expecting that, running the app with interval 20 and batch-size 1, frame 0 would be inferred, then 20 frames would be skipped, then frame 21 would be inferred, and so on…
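To make my expectation concrete, here is a small sketch of the skip pattern I am assuming (the function name is my own; with batch-size=1 each batch holds exactly one frame, so "skip N batches" reads as "skip N frames"):

```python
def inferred_frames(interval, num_frames):
    """Frame numbers on which inference would actually run, under my
    reading of the nvinfer 'interval' setting: after each inferred
    batch, `interval` consecutive batches are skipped (batch-size=1,
    so one frame per batch)."""
    return [n for n in range(num_frames) if n % (interval + 1) == 0]

print(inferred_frames(0, 5))    # interval=0: every frame is inferred
print(inferred_frames(20, 50))  # interval=20: frames 0, 21, 42, ...
```

With interval=0 this yields [0, 1, 2, 3, 4]; with interval=20 over 50 frames it yields [0, 21, 42], which is the pattern I expected to see reflected in the logs.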

Am I misunderstanding something here? Any pointers would be greatly appreciated, thanks beforehand.
Configurations used are provided here file_configuration.tar.xz (8.5 MB)

Screenshots from both app runs are provided here, followed by jtop screenshots showing the GPU usage differences:

Logs from interval 0 and batch-size 1:
interval0.txt (14.0 KB)

Logs from interval 20 and batch-size 1:
interval20.txt (18.4 KB)

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Yes, your understanding is correct.
Could you test with our demo first: samples/configs/deepstream-app/source2_1080p_dec_infer-resnet_demux_int8.txt?
You can change it to interval=50 to see whether the result is as expected.
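The change goes in the [primary-gie] group of that config file; only the interval line is shown below, with all other keys left as shipped (a sketch, not the full section):

```ini
[primary-gie]
# Skip 50 consecutive batches after each inferred batch
interval=50
```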
Also, could you update the JetPack and DeepStream versions to the latest (JetPack 5.1, DeepStream 6.2)?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.