Error when converting ONNX to TensorRT with batch size greater than 1

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only) CUDA 10.2

I am building a face embedding model with TensorRT. The conversion from ONNX to an engine file runs successfully with batch size 1, but when I change the batch size to 2 or larger it raises the error below.

I don't know why this happens. Can you help me solve this problem?
Thanks a lot!


This error is usually caused by the ONNX model being built with batch size = 1.
Please change the batch size to the corresponding value, or use a dynamic batch.

You can check the ONNX dimensions directly with Netron:

Here is an example for the modification for your reference:


I followed your suggestion, but I still get the error. My model is Mobile ArcFace, converted from the InsightFace repo on GitHub. Is there another solution? Thanks.


To give a further suggestion, would you mind sharing the model with us?

Here is my ONNX file.
Please help me check whether you can convert it to an engine file with a batch size greater than 1 successfully.


Would you mind checking this script?

import onnx_graphsurgeon as gs
import onnx

batch = 2

# Load the ONNX graph and overwrite the leading (batch) dimension
# of every input and output tensor.
graph = gs.import_onnx(onnx.load("arcface_mobile.onnx"))
for inp in graph.inputs:
    inp.shape[0] = batch
for out in graph.outputs:
    out.shape[0] = batch

# Export the modified graph back to an ONNX file.
onnx.save(gs.export_onnx(graph), "arcface_mobile_bs.onnx")


Thanks for your help. I successfully built with batch size 2. However, I now get another error with Reshape, Flatten, etc., like the one below.

Please help me, thanks.


Are you using the same model?
To give a further suggestion, would you mind sharing reproducible steps for the latest error?


Yes, I am using the same model. These are the steps I follow to run it:

Step 1: I convert ArcFace MobileNet from mxnet to an ONNX file.
Step 2: I copy the generated ONNX file into my DeepStream project.
When I run the project with batch size 1 it works well,
but when I run it with a batch size greater than 1 it shows the error above. I need to run DeepStream with multiple sources.

I get the same error with another model: a CRNN converted from PyTorch. Is this a bug in DeepStream 5.0?


Since DeepStream needs a predefined batch-size value, it's possible that some parameter in your model still uses batch size = 1.
We will check this and update you with more information later.


Thank you so much.