import onnx_graphsurgeon as gs
import onnx

batch = 1

# Load the ONNX graph and pin the batch dimension of every
# graph input and output to a fixed size.
graph = gs.import_onnx(onnx.load("model.onnx"))
for inp in graph.inputs:
    inp.shape[0] = batch
for out in graph.outputs:
    out.shape[0] = batch
onnx.save(gs.export_onnx(graph), "rebuilt.onnx")
But when I try to run the rebuilt model, I get this error: Cannot find binding of given name: input_0.
Hi @ecmay, if you look in the detectnet log, it will print out the names of the input/output layers:
[TRT] binding 0
-- index 0
-- name 'Input'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3 (SPATIAL)
-- dim #1 300 (SPATIAL)
-- dim #2 300 (SPATIAL)
[TRT] binding 1
-- index 1
-- name 'NMS'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 100 (SPATIAL)
-- dim #2 7 (SPATIAL)
[TRT] binding 2
-- index 2
-- name 'NMS_1'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 1 (SPATIAL)
-- dim #2 1 (SPATIAL)
So you should be able to find their new names this way: in this log the bindings are 'Input', 'NMS', and 'NMS_1', not 'input_0'.
jetson-inference doesn’t necessarily support arbitrary DNN architectures out of the box, so you may have to adapt the pre/post-processing to match what your model expects.
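For example, the input normalization a custom model expects may differ from what jetson-inference applies. A hypothetical sketch of ImageNet-style preprocessing (the mean/std values and the HWC-to-CHW layout here are assumptions about a typical custom model, not jetson-inference's actual code):

```python
import numpy as np

def preprocess(img):
    """Convert an HWC uint8 RGB image to a normalized CHW float32 tensor."""
    x = img.astype(np.float32) / 255.0                 # scale to [0, 1]
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # assumed values
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)   # assumed values
    x = (x - mean) / std                               # per-channel normalization
    return np.ascontiguousarray(x.transpose(2, 0, 1))  # HWC -> CHW

# Example: a 300x300 RGB frame, matching the 3x300x300 input binding above
chw = preprocess(np.zeros((300, 300, 3), dtype=np.uint8))
print(chw.shape)
```

If your model was exported with different normalization baked in, you would change or drop these constants to match.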
Is using ONNX models from Azure Custom Vision best? They have different models I can export to, like TensorFlow and some sub-categories of TensorFlow.
I haven’t used the models from Azure before; all the ONNX models that I have used have been from PyTorch. That’s not to say they’re any better than models trained with TensorFlow, it just means that’s how I have the pre/post-processing set up in jetson-inference.