Are there any indicators to look at when choosing this batch-size? I've changed the value but couldn't find any difference in the results. (So far I'm using 1 as my default.)
Does increasing the batch-size usually improve performance?
Also, does this mean that if, for example, there are 4 frames with 3 objects each and I run with batch-size=1, it will only infer 1 frame (with all 3 objects) per batch?
The gst-nvinfer batch-size parameter describes the inference model run by gst-nvinfer: it is the max batch size of your model. That max batch size is decided when the model is generated (it has nothing to do with DeepStream; you need to consult the person who trained and generated the model).
If you batch the 4 frames into one batch with nvstreammux, and your inference model supports a max batch size of no less than 4, the pipeline can infer the 4 frames in one batch.
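For reference, here is a minimal sketch of how those two batch sizes line up in a 4-source pipeline. It assumes a DeepStream install whose model engine was built with a max batch size of at least 4; the URIs, resolution, and config file name are placeholders, not anything from this thread:

```
# Sketch only: 4 sources batched by nvstreammux, inferred in one batch.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=4 width=1280 height=720 batched-push-timeout=40000 ! \
  nvinfer config-file-path=pgie_config.txt batch-size=4 ! \
  fakesink \
  uridecodebin uri=file:///tmp/video0.mp4 ! mux.sink_0 \
  uridecodebin uri=file:///tmp/video1.mp4 ! mux.sink_1 \
  uridecodebin uri=file:///tmp/video2.mp4 ! mux.sink_2 \
  uridecodebin uri=file:///tmp/video3.mp4 ! mux.sink_3
```

The same value can also be set via `batch-size=4` under the `[property]` group of the nvinfer config file; either way it must not exceed the max batch size the engine was generated with.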
Thank you for replying.
How should you set the batch size for nvinfer if there is a secondary GIE?
For example (sketched below):
4 sources (frames?) as input
⇓
batch size for nvstreammux would be okay to set to 4
⇓
a primary AND a secondary GIE
⇓
batch size for nvinfer would be 8…?
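For illustration only, here is a sketch of the configuration the example above describes; the file names are placeholders, and whether 8 is the right value for the secondary GIE is exactly what is being asked:

```
# pgie_config.txt (primary GIE, runs on the batched frames)
[property]
batch-size=4        # matches the nvstreammux batch-size for 4 sources

# sgie_config.txt (secondary GIE, runs on objects the PGIE detects)
[property]
batch-size=8        # the guess from the example; note that an SGIE
                    # batches detected objects rather than frames, so the
                    # useful value depends on objects per batched frame
```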