I used the caffe-onnx project to generate an R100 ONNX model. Then I got the error below when I used trtexec to build an engine from this R100 ONNX model.
Environment
TensorRT Version: 7.0
GPU Type: dGPU
Nvidia Driver Version: 440.44
CUDA Version: 10.2
CUDNN Version: 7.6
Operating System + Version: DeepStream 5.0 container
Python Version (if applicable): 3.6
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/deepstream:5.0.1-20.09-triton
Relevant Files
Steps To Reproduce
When caffe-onnx generated R100.onnx, it had a fixed batch size of 1.
I wanted a dynamic-batch ONNX model, so I wrote code to make the batch dimension dynamic.
Then I ran trtexec to generate an engine and got this error:
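(The exact command was not captured in this post; a typical trtexec invocation for a dynamic-batch ONNX model on TensorRT 7 looks like the sketch below. The input tensor name `data`, the 3x112x112 shape, and the batch range 1/8/32 are assumptions and must match the actual model.)

```shell
# Sketch of a dynamic-batch engine build with trtexec (TensorRT 7).
# --minShapes/--optShapes/--maxShapes give TensorRT the batch range;
# "data" and 3x112x112 are placeholders for the real input name/shape.
trtexec --onnx=R100.onnx \
        --explicitBatch \
        --minShapes=data:1x3x112x112 \
        --optShapes=data:8x3x112x112 \
        --maxShapes=data:32x3x112x112 \
        --saveEngine=R100.engine
```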
[TensorRT] INTERNAL ERROR: Assertion failed: mg.nodes[mg.regionIndices[outputRegion]].size == mg.nodes[mg.regionIndices[inputRegion]].size …/builder/cudnnBuilderBlockChooser.cpp:127
The ONNX file is too big to upload.
So I used the resnet-50 model from the repo (caffe-onnx/caffemodel/resnet-50) to reproduce the bug.
To get a dynamic batch-size input and output, I used onnx_graphsurgeon to modify the ONNX graph.
My caffe-onnx/convert2onnx.py is uploaded; see the function do_change_batch(). convert2onnx.py (2.5 KB)
Also, when I ran trtexec on this modified ResNet-50 model, the same error occurred again:
[F] [TRT] Assertion failed: mg.nodes[mg.regionIndices[outputRegion]].size == mg.nodes[mg.regionIndices[inputRegion]].size
…/builder/cudnnBuilderBlockChooser.cpp:127