Meaning of different batch-size parameters

In the deepstream app’s spec files, there are several batch-size parameters. I am a little confused as to what exactly they mean. For example: if I have 2 rtsp streams, what batch size should I set in streammux and primary-gie? There is also a batch-size parameter in the config_primary_nano.txt file. I wish to have clarity regarding all of this.

Me too. The documentation does not seem to entirely match the implementation. For example, the documentation says things like:

But I have heard it said here that batch-size must match the number of sources and that temporal batching is not supported. As I mentioned in another thread, if this is the case, can’t the parameter be configured automatically and propagated downstream? Clarification would be nice.

  1. Regarding the batch-size in streammux
    1.1 doc - https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream_Plugin_Manual%2Fdeepstream_plugin_details.02.03.html
    Please feel free to let us know if the description is still not clear enough.
    1.2 For your case of 2 rtsp streams, it’s fine to use batch-size=1 or batch-size=2; the choice depends somewhat on the performance of the downstream components, e.g. nvinfer throughput.
    For example, consider the pipeline “2 x rtsp @ 30fps/stream --> 2 x decoding --> streammux --> nvinfer”. If nvinfer can process 60fps at batch-size=1, you can use batch-size=1. If nvinfer can process less than 60fps at batch-size=1, but can reach 60fps at batch-size=2, it’s recommended to use batch-size=2.
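To make the above concrete, here is a sketch of how the two settings might look in a deepstream-app spec file for the 2-stream example. The group and key names follow the standard deepstream-app config format; the concrete values (resolution, timeout) are placeholders chosen for illustration, not recommendations:

```ini
# Sketch only: streammux batches frames from the 2 rtsp sources
[streammux]
batch-size=2            # one slot per source; 2 frames form one batched buffer
batched-push-timeout=40000  # usec to wait before pushing an incomplete batch
width=1280              # all frames are scaled to this resolution before batching
height=720

# The primary inference engine consumes the batched buffer
[primary-gie]
enable=1
batch-size=2            # should not exceed the engine batch size in the nvinfer config file
config-file=config_primary_nano.txt
```

If the sources come and go at different rates, batched-push-timeout controls how long streammux waits to fill the batch before pushing a partial one downstream.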

  2. Regarding the batch-size in config_primary_nano.txt
    It’s for the nvinfer plugin, which wraps TensorRT, so it’s actually the batch size used to build the TensorRT engine; to be exact, it’s the batchSize passed to the setMaxBatchSize(int batchSize) call - https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/c_api/classnvinfer1_1_1_i_builder.html#a7285560854aec37979363e1d71709bfe .
    Since TensorRT in the current DeepStream does not support dynamic shapes, the input batch for nvinfer cannot be larger than the batch-size in config_primary_nano.txt. And if the input batch is less than batch-size, nvinfer still runs with batch-size (that is, the same perf as if the batch were full at batch-size).
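For reference, the corresponding key lives in the [property] group of the nvinfer config file (e.g. config_primary_nano.txt). A minimal sketch, with the model file names as placeholders only:

```ini
# Sketch of the nvinfer config file (config_primary_nano.txt or similar).
[property]
batch-size=2        # becomes the TensorRT engine's max batch size at build time
network-mode=2      # 0=FP32, 1=INT8, 2=FP16 (placeholder; depends on the platform)
model-engine-file=model_b2_fp16.engine   # placeholder name; engine is rebuilt if absent
```

Because the engine is built for this fixed batch size, changing batch-size here typically triggers a rebuild of the TensorRT engine on the next run, and inference cost is paid at the full batch size even when fewer frames are batched.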

@mdegans, regarding what you mentioned: maybe that was the behavior of an older DeepStream, since the batching behavior of streammux changed a lot in DeepStream 4.0.x.