TensorRT: converting a PyTorch ONNX model with dynamic batch size fails

I export a PyTorch model (.pt) to ONNX. But when I convert the ONNX model to a TensorRT engine with a dynamic batch size, it fails. Why?
The command is:

./trtexec --onnx=/home/xxx/xxx/work/onnx_0209/thySegUnet-eb1.onnx --fp16 --workspace=1024 --saveEngine=thy_my_batch.trt --minShapes=input:1x3x512x512 --optShapes=input:2x3x512x512 --maxShapes=input:15x3x512x512 --maxBatch=15 --best
[02/11/2022-11:10:49] [E] The --batch and --maxBatch flags should not be used when the input model is ONNX or when dynamic shapes are provided. Please use --optShapes and --shapes to set input shapes instead.

If I remove `--maxBatch=15`, the conversion succeeds, but at runtime the BufferManager reports an assertion failure:

TensorRT_Test: /home/xxx/xxx/work/TensorRT-8.2.2.1/samples/common/buffers.h:250: samplesCommon::BufferManager::BufferManager(std::shared_ptr<nvinfer1::ICudaEngine>, int, const nvinfer1::IExecutionContext*): Assertion `engine->hasImplicitBatchDimension() || mBatchSize == 0' failed.
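My understanding (a sketch, not verified) is that the sample BufferManager asserts because it is handed a non-zero batch size for an explicit-batch engine: with explicit batch, the batch count is part of the binding shape set on the execution context, and the assertion `engine->hasImplicitBatchDimension() || mBatchSize == 0` requires batchSize 0. A minimal sketch of what I think the call sequence should look like, assuming the BufferManager from the TensorRT samples (`buffers.h`) and the 1x3x512x512 input from my model:

```cpp
// Sketch (untested): running an explicit-batch engine built from ONNX.
// "infer" is a hypothetical helper, not code from the TensorRT samples.
#include <memory>
#include "NvInfer.h"
#include "buffers.h"  // samplesCommon::BufferManager from the TensorRT samples

void infer(std::shared_ptr<nvinfer1::ICudaEngine> engine, int batch)
{
    auto context = std::unique_ptr<nvinfer1::IExecutionContext>(
        engine->createExecutionContext());

    // With explicit batch, the batch is part of the binding shape, so set it
    // on the context before allocating buffers...
    context->setBindingDimensions(0, nvinfer1::Dims4{batch, 3, 512, 512});

    // ...and pass batchSize = 0 to BufferManager, which is what its assertion
    // `engine->hasImplicitBatchDimension() || mBatchSize == 0` requires.
    samplesCommon::BufferManager buffers(engine, 0, context.get());

    // Fill buffers.getHostBuffer("input") with image data, then:
    buffers.copyInputToDevice();
    context->executeV2(buffers.getDeviceBindings().data());
    buffers.copyOutputToHost();
    // Read results from buffers.getHostBuffer("output").
}
```

Is passing 0 here the intended way to use BufferManager with an explicit-batch engine, or am I misreading the assertion?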

I don't know why. Opening the model in Netron, I can see:

name: input
type: float32[batch_size,3,512,512]
name: output
type: float32[batch_size,1,512,512]

Could anyone help me? My environment:

Ubuntu 18.04, kernel 5.4.0-99-generic
GPU: RTX 3070
NVIDIA driver: 470.74
CUDA driver: 11.4
CUDA runtime: 11.1
TensorRT: 8.2.2.1