Exporting TensorFlow models to Jetson Nano

Hi, Elviron

The root cause is that the ONNX model expects the input image to be INT8, while TensorRT uses Float32.
To solve this issue, you can modify the input data type of the ONNX model directly with our ONNX GraphSurgeon API.

1. Install the ONNX GraphSurgeon API

$ sudo apt-get install python3-pip libprotobuf-dev protobuf-compiler
$ git clone https://github.com/NVIDIA/TensorRT.git
$ cd TensorRT/tools/onnx-graphsurgeon/
$ make install
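
If you want to double-check that the install worked, a quick import test like the one below should print the package version (this assumes the package exposes __version__, which recent releases do):

import onnx_graphsurgeon as gs

# Print the installed ONNX GraphSurgeon version to confirm the module imports.
print(gs.__version__)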

2. Modify your model

import onnx_graphsurgeon as gs
import onnx
import numpy as np

# Load the ONNX model and wrap it in a GraphSurgeon graph.
graph = gs.import_onnx(onnx.load("model.onnx"))

# Change every graph input to float32 so it matches what TensorRT expects.
for inp in graph.inputs:
    inp.dtype = np.float32

# Export the modified graph back to ONNX and save it to a new file.
onnx.save(gs.export_onnx(graph), "updated_model.onnx")
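
To confirm the change took effect, you can reload the updated file and print each graph input's element type. This is just a minimal check, assuming updated_model.onnx is in the working directory; onnx.TensorProto.FLOAT corresponds to float32:

import onnx

# Reload the patched model and list every graph input with its element type.
model = onnx.load("updated_model.onnx")
for inp in model.graph.input:
    elem_type = inp.type.tensor_type.elem_type
    # Expect FLOAT (i.e. float32) here after the GraphSurgeon edit.
    print(inp.name, onnx.TensorProto.DataType.Name(elem_type))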

Thanks.
