Error using custom ONNX model

Hello, I am trying to load a custom ONNX model trained in Azure Custom Vision using the jetson.inference library.

When I first used the line:

net = jetson.inference.detectNet(argv=["--model=jetson-inference/python/training/detection/ssd/models/mymodel/model.onnx", "--labels=jetson-inference/python/training/detection/ssd/models/mymodel/labels.txt", "--input-blob=input_0", "--output-cvg=scores", "--output-bbox=boxes"])

I received the error: Error 4 Internal error (network has dynamic or shape inputs, but no optimization profile has been defined)

I looked around these forums and found that I may need to reformat my model using this script, found here: https://forums.developer.nvidia.com/t/error-when-convert-onnxt-to-tensorrt-with-batch-size-more-than-1/155713/6

import onnx_graphsurgeon as gs
import onnx

batch = 1

# Load the graph and pin the dynamic batch dimension of every
# input/output to a fixed size, so TensorRT can build an engine
# without requiring an optimization profile.
graph = gs.import_onnx(onnx.load("model.onnx"))
for inp in graph.inputs:
    inp.shape[0] = batch
for out in graph.outputs:
    out.shape[0] = batch

onnx.save(gs.export_onnx(graph), "rebuilt.onnx")
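
A quick way to confirm that the rebuild actually pinned the batch dimension is a minimal sketch using only the onnx package (a dynamic dim shows up as a dim_param string, a fixed one as an integer dim_value):

import onnx

model = onnx.load("rebuilt.onnx")
for inp in model.graph.input:
    # dim_param is non-empty for symbolic/dynamic dims;
    # otherwise dim_value holds the fixed size.
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)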

But when trying to run the rebuilt model I get this error: Cannot find binding of given name: input_0

What should I do?

Thank you!

Hi @ecmay, if you look in the detectnet log, it will print out the names of the input/output layers:

[TRT]       binding 0
                -- index   0
                -- name    'Input'
                -- type    FP32
                -- in/out  INPUT
                -- # dims  3
                -- dim #0  3 (SPATIAL)
                -- dim #1  300 (SPATIAL)
                -- dim #2  300 (SPATIAL)
[TRT]       binding 1
                -- index   1
                -- name    'NMS'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  100 (SPATIAL)
                -- dim #2  7 (SPATIAL)
[TRT]       binding 2
                -- index   2
                -- name    'NMS_1'
                -- type    FP32
                -- in/out  OUTPUT
                -- # dims  3
                -- dim #0  1 (SPATIAL)
                -- dim #1  1 (SPATIAL)
                -- dim #2  1 (SPATIAL)

So you should be able to find their new names this way.
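
If you'd rather not dig through the TensorRT log, the same names can be read straight out of the ONNX file. A minimal sketch, assuming the onnx Python package is installed (some exporters also list weight initializers under graph.input, so those are filtered out):

import onnx

model = onnx.load("rebuilt.onnx")
weights = {init.name for init in model.graph.initializer}

# Print only real network inputs, not weight tensors.
print("inputs: ", [i.name for i in model.graph.input if i.name not in weights])
print("outputs:", [o.name for o in model.graph.output])

Whichever of the printed output names correspond to the coverage/scores and bounding-box tensors are what go into --output-cvg and --output-bbox.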

jetson-inference doesn’t necessarily support arbitrary DNN architectures out of the box. You may have to adapt the pre/post-processing to match what your model expects:

Right now for ONNX detection models, it's set up for ssd-mobilenet models that were trained with train_ssd.py, like in the Hello AI World tutorial.
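
For reference, the layer names you used (input_0, scores, boxes) are what train_ssd.py's ONNX export produces, which is why a model from another source usually needs different values. Loading a train_ssd.py model looks along these lines (paths here are placeholders):

net = jetson.inference.detectNet(argv=[
    "--model=models/mymodel/ssd-mobilenet.onnx",
    "--labels=models/mymodel/labels.txt",
    "--input-blob=input_0",
    "--output-cvg=scores",
    "--output-bbox=boxes"])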

Thanks for the reply! I'll look into the logs now.

Is using ONNX models from Azure Custom Vision the best option? They offer several export formats, like TensorFlow and some sub-categories of TensorFlow.

I haven't used the models from Azure before; all the ONNX models that I have used have been from PyTorch. That's not to say they're any better than models trained with TensorFlow, it just means that is how I have the pre/post-processing set up in jetson-inference.
