Questions about frequently changing the batch size

Description

I used to use context->enqueue(batchsize, …) to infer with implicit batch sizes.
Now, with TRT 7, I have to use code like this:
context->setOptimizationProfile(0); // 0 is the first profile, 1 is the second profile, etc.
context->setBindingDimensions(0, Dims4(batchsize, 3, 384, 1280)); // 0 is the first input binding; you may have multiple input bindings
context->executeV2(…)

If my actual batch size changes from time to time, will these extra setBindingDimensions calls become a performance issue?
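For context, one common way to avoid paying this cost on every inference is to re-bind only when the batch size actually changes. This is a sketch, not code from the thread: `Runner`, `lastBatch`, and `bindAndRun` are hypothetical names, and the TensorRT calls are shown as comments since the full engine setup is omitted.

```cpp
#include <cassert>

// Sketch: skip setBindingDimensions when the batch size is unchanged.
// The real TensorRT calls appear as comments; only the caching logic runs.
struct Runner {
    int lastBatch = -1;   // batch size the context is currently bound to
    int rebindCount = 0;  // how many times we actually re-bound

    void bindAndRun(int batchSize) {
        if (batchSize != lastBatch) {
            // In real code:
            //   context->setBindingDimensions(
            //       0, nvinfer1::Dims4(batchSize, 3, 384, 1280));
            ++rebindCount;
            lastBatch = batchSize;
        }
        // context->executeV2(bindings);
    }
};
```

With this wrapper, consecutive inferences at the same batch size pay the setBindingDimensions cost only once.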

Environment

TensorRT Version: 7.2.2
GPU Type: 2070Super
Nvidia Driver Version: 456.71
CUDA Version: 11.0.3
CUDNN Version: 8.0.5
Operating System + Version: Win10

Hi @439290087,

Sorry for the late reply.
There is a small but non-zero overhead of calling setBindingDimensions. The overhead depends on which layers are used in your network. If the overhead is too high, one option is to maintain multiple ExecutionContexts, one for each batch size of interest.
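The multiple-contexts suggestion can be sketched as a small pool keyed by batch size, so each context is configured once and then reused. This is an illustration, not code from the reply: `Ctx` and `ContextPool` are hypothetical names, and the TensorRT calls are shown as comments because the engine setup is omitted.

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// Sketch of "one ExecutionContext per batch size of interest".
// Ctx stands in for a wrapper around nvinfer1::IExecutionContext.
struct Ctx {
    int batchSize;  // the batch size this context was configured for
    // nvinfer1::IExecutionContext* trtContext;  // real member in practice
};

class ContextPool {
public:
    // Create (or reuse) a context fixed to one batch size, so
    // setBindingDimensions runs once per size rather than per inference.
    Ctx& contextFor(int batchSize) {
        auto it = pool_.find(batchSize);
        if (it == pool_.end()) {
            Ctx ctx{batchSize};
            // In real code:
            //   ctx.trtContext = engine->createExecutionContext();
            //   ctx.trtContext->setOptimizationProfile(0);
            //   ctx.trtContext->setBindingDimensions(
            //       0, nvinfer1::Dims4(batchSize, 3, 384, 1280));
            it = pool_.emplace(batchSize, ctx).first;
        }
        return it->second;
    }
    std::size_t size() const { return pool_.size(); }

private:
    std::map<int, Ctx> pool_;
};
```

At inference time you look up the context for the incoming batch size and call executeV2 on it directly, with no per-inference re-binding. Note that each context holds its own activation memory, so this trades GPU memory for the avoided overhead.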

Thank you.