Migrated from DeepStream 4 to DeepStream 5 and got errors

Hello.
The original issue (ONNX on DeepStream 5) is only partially resolved. With batch-size=1, DeepStream 5 can build and use the engine by itself, but with a batch size greater than 1 it crashes. I can build an engine with batch-size > 1 using TensorRT directly, but when DeepStream 5 loads it, it raises the error described in Migrated from DeepStream 4 to DeepStream 5 and got errors.
So I can use ONLY a batch size of 1 with ONNX models.

Because of all this, the problem is only partially solved.

Hi,

Have you re-generated the engine file?
It is possible that DeepStream used an existing engine file that was created with batch-size=1, which causes this error.

Thanks.

Hello.
Yes, it is re-generated. The problem is with batch-size > 1; batch-size=1 works OK. And I did re-generate it in DeepStream 5.

Best regards.

Hello.
I am sure there will eventually be a solution for DeepStream 5 + ONNX with batch size > 1.
You can close this thread.

Best regards.

Hi,

Not sure which solution you found.
Here is our suggestion for your reference.

Assertion Error in buildMemGraph: 0 (mg.nodes[mg.regionIndices[outputRegion]].size == mg.nodes[mg.regionIndices[inputRegion]].size)

Based on the above log, the error occurs because the ONNX model was not generated with the correct batch size.
Since you are trying to use batch-size=2, the model needs to be generated with batch size 2 or a dynamic batch size.

We can reproduce this error with our /usr/src/tensorrt/data/resnet50/ResNet50.onnx model.
To solve this issue, we re-generated the ONNX file for batch-size=2.
This can be achieved via our ONNX GraphSurgeon API:
https://github.com/NVIDIA/TensorRT/tree/master/tools/onnx-graphsurgeon

1. Install

$ git clone https://github.com/NVIDIA/TensorRT.git
$ cd TensorRT/tools/onnx-graphsurgeon/
$ make install

2. Generate your own convert.py.
Here is our sample for resnet50.
In general, we change the input batch, the output batch, and the reshape operation right before the output layer.

import onnx_graphsurgeon as gs
import onnx

batch = 2

graph = gs.import_onnx(onnx.load("ResNet50.onnx"))
for inp in graph.inputs:
    inp.shape[0] = batch
for out in graph.outputs:
    out.shape[0] = batch

# update reshape from [1, 2048] to [2, 2048]
reshape = [node for node in graph.nodes if node.op == "Reshape"]
reshape[0].inputs[1].values[0] = batch

onnx.save(gs.export_onnx(graph), "ResNet50_dynamic.onnx")

Then run it:

$ python3 convert.py

3.
Then you can replace the original ONNX model with the dynamic one.
We have confirmed that DeepStream can run ResNet50_dynamic.onnx without issue in our environment.

Thanks.

Thank you, will try this ASAP.

Hello.
I am stuck at $ make install from step 1.
Python cannot find setuptools, even though setuptools is installed.
Please help; I have been trying for four days :-(

Hi,

Sorry for the late update.

Please try the following to see if helps:

$ sudo apt-get update
$ sudo apt-get install python3-pip
$ sudo pip3 install -U pip testresources setuptools
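If the error persists after this, a common cause is that pip installed setuptools for a different interpreter than the one the Makefile invokes. This stdlib-only sketch (the module list is just an assumption about what the build needs) shows which interpreter is running and whether the build-time modules resolve for it:

```python
import importlib.util
import sys

# Show which interpreter is running and whether the build-time modules resolve.
print("interpreter:", sys.executable)
for mod in ("setuptools", "wheel"):
    found = importlib.util.find_spec(mod) is not None
    print(mod, "found" if found else "MISSING")
```

Run it with the same python3 that make install uses; if setuptools shows MISSING there, install it for that specific interpreter (e.g. python3 -m pip install setuptools).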

Thanks.

Hello, AastaLLL.
Thanks for the update, but I did those steps and Python still cannot find setuptools.
What else can I do?

Hi,

Are you using python3?

Thanks.

Yes, Python3.

Hi,

Sorry for the late update.
Would you mind trying the following command to see if it helps?

$ sudo apt-get install python3-setuptools

Thanks.

Hello.

There is another error:
rm -rf dist/ build/ onnx_graphsurgeon.egg-info/
python3 setup.py bdist_wheel
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] …]
or: setup.py --help [cmd1 cmd2 …]
or: setup.py --help-commands
or: setup.py cmd --help

error: invalid command ‘bdist_wheel’
Makefile:24: recipe for target ‘build’ failed
make: *** [build] Error 1

Hi,

You don’t need to run python3 setup.py bdist_wheel manually.
It can be installed by the following command directly:

$ git clone https://github.com/NVIDIA/TensorRT.git
$ cd TensorRT/tools/onnx-graphsurgeon/
$ make install
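Note that the Makefile itself invokes setup.py bdist_wheel, so the “invalid command ‘bdist_wheel’” error usually means the wheel package is missing from the Python environment running the build (this is inferred from the error text, not confirmed in this thread). A quick check:

```python
import importlib.util

# 'invalid command bdist_wheel' from setup.py usually means the 'wheel'
# package is not installed in the Python environment running the build.
has_wheel = importlib.util.find_spec("wheel") is not None
print("wheel available:", has_wheel)
```

If it prints False, install it for that interpreter (e.g. sudo pip3 install wheel) before re-running make install.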

Thanks.

Hello. I just run sudo make install inside TensorRT/tools/onnx-graphsurgeon/.
But I get that error message.
I removed all the TensorRT data and git cloned it again; same error.

Hi,

Thanks for your feedback.

Some dependencies need to be installed first.
We tried a clean environment, and onnx-graphsurgeon can be installed with the following commands:

$ sudo apt-get install python3-pip libprotobuf-dev protobuf-compiler
$ git clone https://github.com/NVIDIA/TensorRT.git
$ cd TensorRT/tools/onnx-graphsurgeon/
$ make install

Thanks.