I'm trying to retrain a MobileNet model and run it with jetson-inference.
I followed this guide to convert the saved_model to ONNX: tensorflow-onnx/ConvertingSSDMobilenetToONNX.ipynb at master · onnx/tensorflow-onnx · GitHub.
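For reference, the conversion step from that notebook boils down to a tf2onnx command along these lines (the saved_model directory name is a placeholder for my export path, and the opset is what I used, not necessarily what the notebook specifies verbatim):

python -m tf2onnx.convert --saved-model ssd_mobilenet_saved_model --output model.onnx --opset 11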
I could do that successfully. But when I tried to run it using detectnet on my Jetson Nano, I got the error "Unsupported ONNX data type: UINT8".
I was able to fix that with the following graph surgery, which rewrites the model input to float32:
import onnx
import onnx_graphsurgeon as gs
import numpy as np

graph = gs.import_onnx(onnx.load("model.onnx"))
for inp in graph.inputs:
    inp.dtype = np.float32  # rewrite the UINT8 input to float32
onnx.save(gs.export_onnx(graph), "updated_model.onnx")
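To double-check that the change took effect, I reloaded the updated model and printed the input types (elem_type 1 is FLOAT, 2 is UINT8):

import onnx

model = onnx.load("updated_model.onnx")
for inp in model.graph.input:
    print(inp.name, inp.type.tensor_type.elem_type)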
So after that, when I run 'python3 detectnet.py "images/*.jpeg" result_30/found_%i.jpeg --network=updated_model.onnx --threshold=0.3', I get:
[TRT] ModelImporter.cpp:125: Resize__159 [Resize] inputs: [Transpose__144:0 -> (1, 3, -1, -1)], [Concat__158:0 -> (4)],
[TRT] ImporterContext.hpp:141: Registering layer: Resize__159 for ONNX node: Resize__159
ERROR: builtin_op_importers.cpp:2549 In function importResize:
Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
[TRT] failed to parse ONNX model 'updated_model.onnx'
[TRT] device GPU, failed to load updated_model.onnx
[TRT] detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Is there a way to load this external model successfully?
Running the ONNX model through TensorFlow works, but inference takes far too long.
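One idea I'm considering, but haven't verified on this model: since TensorRT insists that the Resize scales be an initializer, folding the graph's constants with onnx-graphsurgeon might collapse the subgraph that computes them (the Concat__158 feeding the Resize) into one. A sketch (fold_constants() needs onnxruntime installed, and this may not help if the scales depend on the dynamic input dimensions):

import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("updated_model.onnx"))
# fold constant subgraphs into initializers, then remove the now-dead nodes
graph.fold_constants().cleanup()
onnx.save(gs.export_onnx(graph), "folded_model.onnx")

Is that a reasonable direction, or is there a better-supported path?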