Description
Hello, I exported an ONNX model with a dynamic input of shape [batch, 3, height, width] using this repository:
https://github.com/triple-Mu/YOLOv8-TensorRT/tree/triplemu/dynamic
Then I ran trtexec with the following parameters:
trtexec --onnx=YOLOv8-dbs-dbd.onnx --saveEngine=YOLOv8-dbs-dbd.trt --explicitBatch=12 --minShapes=images:1x3x480x480 --optShapes=images:6x3x512x512 --maxShapes=images:12x3x640x640 --memPoolSize=workspace:3000
This produced a TensorRT engine with dynamic dimensions and a dynamic batch size. When I then allocate CUDA memory for, say, a 480-pixel input and run inference, I get the following error:
7: [shapeMachine.cpp::executeContinuation::887] Error Code 7: Internal Error (Add_430: dimensions not compatible for elementwise. Condition '==' violated: 5376 != 4725. Instruction: CHECK_EQUAL 5376 4725.)
The error points to the Add_430 node in the graph.
Why does it fail for a 480-pixel input if the model has dynamic input dimensions? And, more generally, how should CUDA memory be managed when the engine has dynamic shapes?
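For reference, this is roughly how I am trying to run the engine from Python (a minimal sketch, assuming PyCUDA, a single input binding named "images", and a single output binding right after it; names and indices are my assumptions, not taken from the model). Do I need to call set_binding_shape with the real shape before every inference, and should the device buffers be sized for the max profile shape?

import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine built by trtexec.
with open("YOLOv8-dbs-dbd.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate the input buffer once for the largest profile shape (12x3x640x640
# from --maxShapes) and reuse it for smaller inputs such as 1x3x480x480.
max_in_shape = (12, 3, 640, 640)
d_input = cuda.mem_alloc(int(np.prod(max_in_shape)) * np.float32().nbytes)

def infer(batch):
    """Run inference on a float32 NCHW batch, e.g. shape (1, 3, 480, 480)."""
    in_idx = engine.get_binding_index("images")  # assumed input name
    # Tell the context the actual input shape for this call.
    context.set_binding_shape(in_idx, batch.shape)
    assert context.all_binding_shapes_specified

    # The output shape is only known after the input shape has been set.
    out_idx = in_idx + 1  # assumption: single output right after the input
    out_shape = tuple(context.get_binding_shape(out_idx))
    h_output = np.empty(out_shape, dtype=np.float32)
    d_output = cuda.mem_alloc(h_output.nbytes)

    cuda.memcpy_htod_async(d_input, np.ascontiguousarray(batch), stream)
    context.execute_async_v2([int(d_input), int(d_output)], stream.handle)
    cuda.memcpy_dtoh_async(h_output, d_output, stream)
    stream.synchronize()
    return h_output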
Environment
TensorRT Version: 9.0.1
NVIDIA GPU: RTX3060
NVIDIA Driver Version: 535
CUDA Version: 11.1
CUDNN Version: 8.0.4
Operating System:
Python Version (if applicable): 3.8
Relevant Files
link to ONNX model: YOLOv8-dbs-dbd.onnx
link to TensorRT model: YOLOv8-dbs-dbd.trt