I can now change the batch size dynamically at inference time in TensorRT.
The batch size can vary between the min and max set in the optimization profile.
But the issue is with my plugin.
Even though I use IPluginV2DynamicExt, the input/output dimensions are fixed.
For example, the plugin's input dimensions are batch_size (say 10) x 20 x 40,
and the batch size is fixed at 10 at plugin creation time.
When I change the batch size at the network input dynamically (i.e., the batch size changes on every inference iteration), the plugin still has a fixed input size of 10x20x40 and a fixed output batch size of 10.
It never changes with the new batch size.
Is it possible to change the plugin’s batch size dynamically?