• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0
• TensorRT Version: 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only): CUDA 10.2
Hi.
I am converting a face embedding model to TensorRT. It runs successfully with batch size 1, and the ONNX file converts to an engine file, but when I change the batch size to 2 or larger it raises the error below.
I followed your suggestion, but I still hit the same error. My model is Mobile ArcFace, converted from the InsightFace repo on GitHub. Is there another solution? Thanks.
import onnx_graphsurgeon as gs
import onnx

batch = 2

graph = gs.import_onnx(onnx.load("arcface_mobile.onnx"))

# Overwrite the batch dimension on every graph input and output
for inp in graph.inputs:
    inp.shape[0] = batch
for out in graph.outputs:
    out.shape[0] = batch

onnx.save(gs.export_onnx(graph), "arcface_mobile_bs.onnx")
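To confirm the rewrite actually took effect before handing the file to TensorRT, something like this can print the input shapes of the patched model (a minimal check on my side, assuming only the onnx package):

import onnx

model = onnx.load("arcface_mobile_bs.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # the first dimension should now be 2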
Hi,
Yes, I am using the same model. Here are the steps I follow to run the model:
Step 1: I convert ArcFace MobileNet from MXNet to an ONNX file (a sketch of this export is shown after the steps).
Step 2: I copy the generated ONNX file into my DeepStream project.
When I run the project with batch size 1 it works well,
but when I run it with a batch size greater than 1 it shows the error above. I need to run DeepStream with multiple sources.
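For reference, Step 1 looks roughly like this (a minimal sketch using the standard MXNet ONNX exporter; the checkpoint file names and the 112x112 input shape are assumptions based on the usual InsightFace layout, so adjust them to your files):

import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Assumed InsightFace checkpoint names; change to match your checkpoint
sym = "model-symbol.json"
params = "model-0000.params"
input_shape = [(1, 3, 112, 112)]  # ArcFace models typically take 112x112 RGB face crops

onnx_mxnet.export_model(sym, params, input_shape, np.float32, "arcface_mobile.onnx")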
Thanks.
Since DeepStream needs a predefined batch-size value, it's possible that some parameter in your model still uses batchsize==1.
We will check this and update you with more information later.
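In the meantime, one thing worth checking (a sketch, not a verified fix): ArcFace exports often contain Reshape nodes whose shape constant hard-codes a leading 1, which keeps the rest of the graph at batch size 1 even after the inputs are patched. Something like this can locate and relax them:

import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("arcface_mobile_bs.onnx"))
for node in graph.nodes:
    # Reshape takes its target shape as the second input; if that input is
    # a constant whose first value is 1, the batch dimension is frozen at 1
    if node.op == "Reshape" and len(node.inputs) > 1 and isinstance(node.inputs[1], gs.Constant):
        shape = node.inputs[1].values
        if shape[0] == 1:
            print(node.name, "reshapes to", shape)
            # -1 lets the batch dimension be inferred at runtime instead
            node.inputs[1].values = np.array([-1, *shape[1:]], dtype=np.int64)
onnx.save(gs.export_onnx(graph), "arcface_mobile_fixed.onnx")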