• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only): CUDA 10.2
I am converting a face embedding model to TensorRT. With batch size 1 it converts successfully from ONNX to an engine file, but when I change the batch size to 2 or larger it raises the error below.
I don't know why. Can you help me solve this problem?
Thanks a lot!
This error is usually caused by an ONNX model that was built with batchsize=1.
Please change the batch size to the corresponding value, or use a dynamic batch.
You can check the ONNX dimensions directly with Netron:
Here is an example of the modification for your reference:
Not sure which solution you found.
Here is our suggestion for your reference.
Assertion Error in buildMemGraph: 0 (mg.nodes[mg.regionIndices[outputRegion]].size == mg.nodes[mg.regionIndices[inputRegion]].size)
Based on the log above, the error occurs because the ONNX model was not generated with the correct batch size.
Since you are trying to use batchsize=2, the model needs to be generated with batch size 2 or a dynamic batch size.
We can reproduce this error on our side as well.
I followed your suggestion, but I still hit the error. My model is Mobile ArcFace, converted from the InsightFace repo on GitHub. Is there another solution? Thanks.
To give a further suggestion, would you mind sharing the model with us?
Here is my onnx file https://drive.google.com/file/d/1uBYdO8hCgfveq4IDQWGWdJKM84JBX5wW/view?usp=sharing
Please check whether you can successfully convert it to an engine file with a batch size greater than 1.
Would you mind checking this script?
import onnx
import onnx_graphsurgeon as gs

batch = 2
graph = gs.import_onnx(onnx.load("arcface_mobile.onnx"))
# Overwrite only the leading (batch) dimension, not the whole shape.
for inp in graph.inputs:
    inp.shape[0] = batch
for out in graph.outputs:
    out.shape[0] = batch
onnx.save(gs.export_onnx(graph), "arcface_mobile_b2.onnx")
Thanks for your help. I successfully built with batch size equal to 2. However, I hit another error with Reshape, Flatten… like below.
Please help me, thanks.
Are you using the same model?
To give a further suggestion, would you mind sharing reproducible steps for the latest error?
Yes, I am using the same model. Here are the steps I follow to run the model:
Step 1: I convert the ArcFace MobileNet from MXNet to an ONNX file.
Step 2: I copy the generated ONNX file to my DeepStream project.
When I run the project with batch size equal to 1, it works well,
but when I run it with a batch size greater than 1, it shows the error above. I need to run DeepStream with multiple sources.
I hit the same error with another model, a CRNN converted from PyTorch. Is this a bug in DeepStream 5.0?
Since DeepStream needs a predefined batch-size value, it's possible that some parameter in your model still uses a hard-coded batch dimension.
We will check this and share more information with you later.