Tiruna
April 13, 2018, 9:43am
I use DeepStream to process several videos, but the videos are all the same.
I print the batch size in the execute function of a custom module as follows.
std::vector<int> inputShape = vpInputTensors[0]->getShape();
vpInputTensors[0] holds the frame data decoded by DeepStream.
But inputShape[0] is unstable. What is the reason?
Hi,
Could you share how many videos you are decoding at the same time, and the observed inputShape values?
Thanks.
Tiruna
April 17, 2018, 1:13am
I tried both 4K (3840×2160) and 1080p (1920×1080) videos.
The actual inputShape are as follows.
number of videos | inputShape (BatchSize × Channel × Height × Width)
6  | 4 × 3 × 2160 × 3840
5  | 3 × 3 × 2160 × 3840
3  | 2 × 3 × 2160 × 3840
24 | 14 × 3 × 1920 × 1080
19 | 12 × 3 × 1920 × 1080
16 | 10 × 3 × 1920 × 1080
But the BatchSize is unstable; the values above are approximate means taken across about 7,000 inferences.
Hi,
Does the instability occur only in the batch size?
If yes, this behavior should be okay, since the inference batch size depends on the decoding rate.
Thanks.