Error when converting ONNX to TensorRT with batch size greater than 1

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 5.0
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only) CUDA 10.2

Hi.
I am converting a face embedding model to TensorRT. The conversion from ONNX to an engine file runs successfully with batch size 1, but when I change the batch size to 2 or larger it raises the error below.


I don't know why this happens. Could you help me solve this problem?
Thanks a lot!

Hi,

The error is usually caused by the ONNX model being exported with batchsize=1.
Please change the batch size to the desired value, or use a dynamic batch dimension.

You can inspect the ONNX input dimensions directly with Netron:
https://lutzroeder.github.io/netron/

Here is an example for the modification for your reference:

Thanks.

I followed your suggestion, but I still get the same error. My model is Mobile ArcFace, converted from the InsightFace repo on GitHub. Is there another solution? Thanks.

Hi,

To give a further suggestion, would you mind sharing the model with us?
Thanks.

Hi,
Here is my onnx file https://drive.google.com/file/d/1uBYdO8hCgfveq4IDQWGWdJKM84JBX5wW/view?usp=sharing
Please let me know if you can successfully convert it to an engine file with batch size greater than 1.
Thanks

Hi,

Would you mind checking this script?

import onnx_graphsurgeon as gs
import onnx

batch = 2

# Load the graph and overwrite the batch dimension of every
# graph input and output with the desired fixed batch size.
graph = gs.import_onnx(onnx.load("arcface_mobile.onnx"))
for inp in graph.inputs:
    inp.shape[0] = batch
for out in graph.outputs:
    out.shape[0] = batch

# Save the modified model under a new name.
onnx.save(gs.export_onnx(graph), "arcface_mobile_bs.onnx")

Thanks.

Thanks for your help. I can now build successfully with batch size 2. However, I get another error with Reshape, Flatten, etc., like below:


please help me, thanks

Hi,

Are you using the same model?
To give a further suggestion, would you mind sharing reproducible steps for the latest error?

Thanks.

Hi,
Yes, I am using the same model. Here are the steps I follow to run the model:

Step 1: I convert ArcFace MobileNet from MXNet to an ONNX file.
Step 2: I copy the generated ONNX file to my DeepStream project.
The project runs well with batch size 1, but when I run it with a batch size greater than 1 it shows the above error. I need to run DeepStream with multiple sources.
thanks
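For reference, the batch size DeepStream uses must also be set in its configuration files; a hedged fragment (key names from the DeepStream 5.0 docs; file and model names are placeholders):

```
# deepstream-app config: the muxer batches one frame per source
[streammux]
batch-size=2

# nvinfer config (separate file): should match the engine's batch size
[property]
onnx-file=arcface_mobile_bs.onnx
batch-size=2
```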

Hi,
I get the same error with another model, a CRNN converted from PyTorch. Is this a bug in DeepStream 5.0?
thanks

Hi,

Since DeepStream needs a predefined batch-size value, it's possible that some parameter in your model still uses batchsize==1.
We will check this and update more information with you later.

Thanks.

Hi,
Thank you so much.