Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
I have a pipeline with an SGIE classifier operating on bounding boxes from a DetectNet PGIE. The PGIE has a batch size of 32 and runs inference on fully formed batches.
Due to the nature of the video we are processing, a batch will contain roughly 0-128 detected objects. Should I set the SGIE batch size to the maximum expected number of objects (128), or to the average number expected (perhaps ~80)? Do you have any best practices or recommendations for choosing the SGIE batch size so that it is as efficient as possible? Real-time processing is not a requirement.
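For context, this is the kind of Gst-nvinfer SGIE config fragment I am asking about, with the batch-size line in question. The file and engine names here are placeholders, not my actual setup:

```ini
[property]
gpu-id=0
# The setting this question is about: max expected objects (128)
# vs. the average (~80)?
batch-size=128
# process-mode=2 makes this an SGIE operating on detected objects
process-mode=2
# Classify objects produced by the PGIE (its gie-unique-id)
operate-on-gie-id=1
gie-unique-id=2
# network-type=1 = classifier
network-type=1
model-engine-file=classifier.engine
```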
Thanks in advance,