Best practices for SGIE batch-size


I have a pipeline with an SGIE classifier operating on bounding boxes from a DetectNet PGIE. The PGIE has a batch size of 32 and runs inference on fully formed batches.
Due to the nature of the video we are processing, a batch will contain roughly 0-128 detected objects. Should I set the SGIE batch size to the maximum expected number of objects (128), or to the average expected number (perhaps ~80)? Do you have any best practices or recommendations for choosing the SGIE batch size so that inference is as efficient as possible? Real-time processing is not a requirement.

Thanks in advance,



Do you mean you have 32 source streams?

You should set the SGIE batch size to your model's maximum batch size (assuming your model supports dynamic batching). It is determined by the model, not by the sources.
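For reference, the SGIE batch size is set in the secondary Gst-nvinfer config file. A minimal sketch of the relevant `[property]` keys is below; the GIE IDs are hypothetical placeholders and must match your own pipeline:

```
[property]
gie-unique-id=2        # this SGIE's unique ID (placeholder)
operate-on-gie-id=1    # operate on objects detected by the PGIE (placeholder)
process-mode=2         # 2 = secondary mode (run on detected objects)
network-type=1         # 1 = classifier
batch-size=128         # model's maximum batch size
```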

Hi @Fiona.Chen,

My SGIE classifier is a darknet_53 trained with TAO, so I assume its batch size is dynamic, with no real maximum batch size (other than hardware limitations)?

How would you approach this?


So you need to test whether the maximum batch size of 128 works within your hardware limitations, including GPU memory.
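As a rough first check before building the engine, you can estimate how the input tensor alone scales with batch size. This is only a back-of-envelope sketch; the input dimensions below (3x224x224, FP32) are hypothetical and must be replaced with your model's actual shape, and it ignores weights, activations, and TensorRT workspace memory:

```python
# Back-of-envelope estimate of SGIE input-tensor memory at a given batch size.
# Assumes a 3x224x224 FP32 input (hypothetical dims; check your model's input).

def input_tensor_bytes(batch, channels=3, height=224, width=224, dtype_bytes=4):
    """Bytes needed for one input batch (weights/activations not included)."""
    return batch * channels * height * width * dtype_bytes

mib = input_tensor_bytes(128) / (1024 ** 2)
print(f"batch 128 input tensor: {mib:.1f} MiB")  # prints "batch 128 input tensor: 73.5 MiB"
```

The real test is still building and running the engine at batch 128 on the target device and watching GPU memory usage.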


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.