Why is the actual batch size unstable when DeepStream processes multiple videos?

I use DeepStream to process several videos, all of which are identical.
I print the batch size in the execute function of a custom module as follows.

std::vector<int> inputShape = vpInputTensors[0]->getShape();

vpInputTensors[0] holds the frame data decoded by DeepStream,
but inputShape[0] is unstable. What is the reason?


Could you share the number of videos you are decoding at the same time and the observed inputShape values with us?

I tried both 4K (3840×2160) and 1080p (1920×1080) videos.
The actual inputShape values are as follows.

number of videos | inputShape
6  | 4×3×2160×3840 (BatchSize×Channel×Height×Width)
5  | 3×3×2160×3840
3  | 2
24 | 14×3×1920×1080
19 | 12
16 | 10×3×1920×1080

The BatchSize is unstable; the values above are mean values across approximately 7,000 inferences.


Does the instability occur in the batch size only?
If yes, this behavior is expected, since the batch assembled for inference depends on the decoding rate.
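For context: with Gst-nvstreammux, a batch is pushed either when it reaches the configured batch-size or when batched-push-timeout expires, so sources that decode more slowly produce partial batches. A sketch of the relevant [streammux] group in a deepstream-app style config (values are illustrative, not a recommendation):

```
[streammux]
# Upper bound on frames per batch; typically one slot per source.
batch-size=24
# Microseconds to wait before pushing a partially filled batch.
# Larger values give slow decoders more time to contribute a frame.
batched-push-timeout=40000
width=1920
height=1080
```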