Best batch size for app using PeopleSemSegNet on Jetson Xavier NX

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only) Unknown
• TensorRT Version 8.4
• Issue Type( questions, new requirements, bugs) questions

Hi, I’m building an app which uses PeopleSemSegNet.

I want to improve the output stream’s fps, and I found there is a parameter called batch size.

What is batch size, and what is a desirable value for it in order to improve the fps?

Please give me your advice.

There is no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

It has different meanings in different plugins. You’d better read our guides first. Thanks
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvstreammux.html
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html
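
For reference, here is a minimal Python sketch of where batch-size appears for nvstreammux versus nvinfer in a DeepStream pipeline. This is not the poster’s actual app: the single-source assumption, the 960x544 resolution, and the config_infer.txt file name are placeholders.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)

    NUM_SOURCES = 1  # assumption: one input stream

    # nvstreammux batch-size: how many per-source buffers are combined into
    # one batched buffer; it is normally set to the number of input sources.
    streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
    streammux.set_property("batch-size", NUM_SOURCES)
    streammux.set_property("width", 960)
    streammux.set_property("height", 544)
    streammux.set_property("batched-push-timeout", 40000)  # in microseconds

    # nvinfer batch-size: how many frames the TensorRT engine infers per
    # call; it is usually set in the nvinfer config file (config_infer.txt
    # is a placeholder name) and should match the engine's batch size.
    pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
    pgie.set_property("config-file-path", "config_infer.txt")
    pgie.set_property("batch-size", NUM_SOURCES)

With a single input stream, raising nvinfer’s batch-size will generally not improve fps by itself, since only one frame is available per batch; a larger batch mainly helps when several sources are muxed together and the TensorRT engine is built for that batch size.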
