How can I use dynamic shapes in TensorRT?

Description

Can an engine built with dynamic shapes run forward inference on images of different sizes?

Environment

TensorRT Version: 7.2.1.6
GPU Type: 2080Ti
Nvidia Driver Version: 440
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: centos7
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.14
PyTorch Version (if applicable): -
Baremetal or Container (if container which image + tag):

For example

I want my model to accept dynamic input dimensions. For example:

my detection model needs to accept input images of size 1280x720x3, 640x360x3, and 320x180x3.

So, can I do it this way:

trtexec --explicitBatch --onnx=model.onnx \
    --minShapes=input:1x320x180x3 \
    --optShapes=input:1x640x360x3 \
    --maxShapes=input:1x1280x720x3 \
    --shapes=input:1x640x360x3 \
    --saveEngine=model.engine
# --shapes is the actual inference input shape

Also, I don't know what optShapes and shapes mean.

Can you help me?

Thanks!

Hi @1965281904,
The link below will help you understand dynamic shapes:

https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec#example-4-running-an-onnx-model-with-full-dimensions-and-dynamic-shapes
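To illustrate the semantics of those flags: minShapes and maxShapes bound the input shapes the engine will accept, optShapes is the shape TensorRT tunes kernels for, and --shapes is the shape actually fed at run time, which must lie inside the min/max range. A minimal sketch in plain Python (no TensorRT required; shape_in_profile is a hypothetical helper name, not a TensorRT API):

```python
# Sketch of trtexec's optimization-profile flags as a range check.
# min/max bound what the engine accepts; opt is the tuning point;
# the actual inference shape must fall within [min, max].

def shape_in_profile(shape, min_shape, max_shape):
    """True if every dimension of `shape` lies within the profile range."""
    return all(lo <= d <= hi for d, lo, hi in zip(shape, min_shape, max_shape))

min_s = (1, 320, 180, 3)   # --minShapes=input:1x320x180x3
opt_s = (1, 640, 360, 3)   # --optShapes=input:1x640x360x3
max_s = (1, 1280, 720, 3)  # --maxShapes=input:1x1280x720x3

print(shape_in_profile((1, 640, 360, 3), min_s, max_s))    # -> True (valid --shapes)
print(shape_in_profile((1, 1920, 1080, 3), min_s, max_s))  # -> False (outside the profile)
```

Shapes inside the range but far from optShapes still run; they are just not the shape the kernels were tuned for.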


Thanks!

Hi, AakankshaS!
What do I need to do so the model can run inference on input images of different sizes? I don't need multi-batch inference.

For example, I need to feed image1 (w=608, h=608, c=3) and image2 (w=1152, h=608, c=3), and I don't want to scale them to a uniform size.

How can I solve this problem?

Thanks!

Hi @1965281904 ,
To use dynamic shapes, the ONNX model itself must have dynamic input dimensions.
min and max specify the shape range the engine can run with,
and the engine is optimized for the opt shape of each profile.
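Concretely, for the two image sizes mentioned above, one optimization profile whose range covers both would let a single engine accept either image without rescaling. A plain-Python sketch of that check (the min/opt/max values here are hypothetical choices for those two images; at run time you would set the per-image input shape through the TensorRT runtime API on the built engine, which is not shown):

```python
# Sketch: one profile covering both image sizes from the question,
# assuming an NxWxHxC input layout as in the trtexec command above.
images = {
    "image1": (1, 608, 608, 3),   # w=608,  h=608
    "image2": (1, 1152, 608, 3),  # w=1152, h=608
}
# Hypothetical profile: width may vary from 608 to 1152, height fixed at 608.
min_s = (1, 608, 608, 3)
max_s = (1, 1152, 608, 3)

for name, shape in images.items():
    fits = all(lo <= d <= hi for d, lo, hi in zip(shape, min_s, max_s))
    print(name, "fits profile:", fits)  # -> True for both images
```

If the two sizes cannot be covered by one contiguous range, multiple optimization profiles can be built into the same engine, one per shape family.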

Thanks!