Dynamic batch size

Description

Hi, I am new to TensorRT and I am trying to build a TRT engine with a dynamic batch size.
I already have an ONNX model with an input shape of -1x299x299x3, but when I tried
to convert the ONNX model to TRT with the following command:
trtexec --onnx=model_Dense201_BM_FP32_Flex.onnx --saveEngine=model_Dense201_BM_FP32_Flex.trt --explicitBatch
the output showed the following line: Dynamic dimensions required for input: input_2, but no shapes were provided. Automatically overriding shape to: 1x299x299x3
Are there any suggestions on how I can fix this issue? Thanks!

Environment

TensorRT Version: 7.2.2
GPU Type: Tesla V100-SXM2-32GB
Nvidia Driver Version: 450.51.06
CUDA Version: 11.2
CUDNN Version: 11.2.67
Operating System + Version: DGX OS 4.5.0
Python Version (if applicable): 3.8.5
TensorFlow Version (if applicable): 2.2.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi,

We recommend that you use the most recent TensorRT version 8.5.2.

In case you would like to build the TRT engine with dynamic shapes using the trtexec tool, please refer to the following:
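The warning in the original post appears because trtexec was not told which shape ranges the dynamic input may take. A minimal sketch of the command with explicit shape ranges: the input name input_2 is taken from the warning message, and the min/opt/max batch sizes (1/8/32) are illustrative assumptions, not values from the original post.

```shell
# Provide min/opt/max shapes for the dynamic input so TensorRT can build
# an optimization profile covering that batch-size range.
# "input_2" comes from the warning; batch sizes 1/8/32 are illustrative.
trtexec --onnx=model_Dense201_BM_FP32_Flex.onnx \
        --saveEngine=model_Dense201_BM_FP32_Flex.trt \
        --explicitBatch \
        --minShapes=input_2:1x299x299x3 \
        --optShapes=input_2:8x299x299x3 \
        --maxShapes=input_2:32x299x299x3
```

At runtime the engine will then accept any batch size between the min and max values given here.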

Also, please refer to the following document (the developer guide) for more details.

Thank you.

Hi, thanks for the reply! Sorry I didn't make my question clear. What I am asking is how to generate a TRT engine that accepts dynamic batch inputs when inferencing with enqueueV2 (the C++ API), rather than how to run an ONNX model with trtexec.

Please refer to the following sample which demonstrates how to use dynamic input dimensions in TensorRT.
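For the C++ path, the same idea applies: attach an optimization profile with min/opt/max dimensions when building the engine, then set the actual input dimensions on the execution context before each enqueueV2 call. The sketch below is an assumption-laden outline (not a compiled, complete program): it presumes a TensorRT 7.x build with a parsed ONNX network, uses the input name input_2 and the 1/8/32 batch range from this thread's example, and omits CUDA stream setup, buffer allocation, and error handling.

```cpp
// Sketch: building a TensorRT engine with a dynamic batch dimension and
// running it via enqueueV2. Assumes `builder`, `network` (parsed from ONNX),
// and `config` already exist; illustrative only, not a complete program.
#include "NvInfer.h"

using namespace nvinfer1;

void buildWithDynamicBatch(IBuilder* builder, INetworkDefinition* network,
                           IBuilderConfig* config)
{
    // Describe the allowed range for the dynamic input.
    // Batch range 1..32 with opt 8 is an illustrative assumption.
    IOptimizationProfile* profile = builder->createOptimizationProfile();
    profile->setDimensions("input_2", OptProfileSelector::kMIN, Dims4{1, 299, 299, 3});
    profile->setDimensions("input_2", OptProfileSelector::kOPT, Dims4{8, 299, 299, 3});
    profile->setDimensions("input_2", OptProfileSelector::kMAX, Dims4{32, 299, 299, 3});
    config->addOptimizationProfile(profile);
    // ... then build the engine with builder->buildEngineWithConfig(*network, *config)
}

void infer(IExecutionContext* context, void** bindings, cudaStream_t stream,
           int actualBatch)
{
    // Before enqueueV2, tell the context the concrete input shape for this
    // call; binding index 0 is assumed to be the network input.
    context->setBindingDimensions(0, Dims4{actualBatch, 299, 299, 3});
    context->enqueueV2(bindings, stream, nullptr);
}
```

The key point for the question as asked: with dynamic shapes, every enqueueV2 call must be preceded by setBindingDimensions (or an equivalent shape-setting call) so the context knows the batch size of the buffers being passed in.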

For more info, please refer,

Thank you.