Deploying a TensorFlow model on Jetson Xavier NX: ONNX to TensorRT

Hello!
I’m trying to deploy a model from the TensorFlow 2 Object Detection API on an NVIDIA Jetson Xavier NX running JetPack 4.4.

The idea is to retrain the network (ssd_mobilenet_v2_320x320_coco17_tpu-8) with custom data on my PC with an RTX 3090 and then convert the model to ONNX using tf2onnx. The ONNX model can then be used by dusty-nv’s ros_deep_learning node (GitHub - dusty-nv/ros_deep_learning: Deep learning inference nodes for ROS with support for NVIDIA Jetson TX1/TX2/Xavier and TensorRT).

The first steps, including solving the UINT8 input problem with onnx-graphsurgeon, were straightforward. But when the parser tries to read the ONNX model, it fails on a Squeeze layer. I get the following error message:

[04/01/2021-20:54:37] [V] [TRT] ModelImporter.cpp:125: Resize__47 [Resize] inputs: [Transpose__38:0 -> (1, 3, 320, 320)], [const_empty_float__37 -> ()], [const_empty_float__37 -> ()], [Concat__46:0 -> (4)], 
[04/01/2021-20:54:37] [V] [TRT] ImporterContext.hpp:141: Registering layer: Resize__47 for ONNX node: Resize__47
[04/01/2021-20:54:37] [V] [TRT] ImporterContext.hpp:116: Registering tensor: Resize__47:0 for ONNX tensor: Resize__47:0
[04/01/2021-20:54:37] [V] [TRT] ModelImporter.cpp:179: Resize__47 [Resize] outputs: [Resize__47:0 -> (-1, -1, -1, -1)], 
[04/01/2021-20:54:37] [V] [TRT] ModelImporter.cpp:103: Parsing node: StatefulPartitionedCall/Preprocessor/ResizeImage/resize/Squeeze [Squeeze]
[04/01/2021-20:54:37] [V] [TRT] ModelImporter.cpp:119: Searching for input: Resize__47:0
[04/01/2021-20:54:37] [V] [TRT] ModelImporter.cpp:119: Searching for input: const_starts__3991
[04/01/2021-20:54:37] [V] [TRT] ModelImporter.cpp:125: StatefulPartitionedCall/Preprocessor/ResizeImage/resize/Squeeze [Squeeze] inputs: [Resize__47:0 -> (-1, -1, -1, -1)], [const_starts__3991 -> (1)], 
terminate called after throwing an instance of 'std::out_of_range'
  what():  Attribute not found: axes

I’ve read that fixing the input shape might solve the problem, so I added this line to my onnx-graphsurgeon script (I wasn’t sure about the shape, so I tried [1, 3, 320, 320] too):
graph.inputs[0].shape=[1,320,320,3]

Unfortunately the parser throws the same error.

  1. What can I do to solve this?
  2. Is there a better way to deploy a retrained model from the TensorFlow 2 Object Detection API in a Jetson environment?
  3. From the TensorRT release notes (Release Notes :: NVIDIA Deep Learning TensorRT Documentation) I gather that it was only tested with TF 1.15. Might this cause the problem, since I am using TF 2.4.1 on my machine?

I would really appreciate some guidance.

Hi,

We are checking this issue with the pre-trained model: ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz
Will share more information with you later.

Thanks.

Hi,

We can reproduce this error with the pre-trained model shared above.

The root cause is that the axes value of the Squeeze layer changed from a node attribute to an input tensor in opset 13.
The onnx-tensorrt parser added support for this only recently, so it is not available in the TensorRT 7.1.3 release shipped with JetPack.

To solve this, you can replace nvonnxparser with the latest version from GitHub:

$ sudo apt-get install -y protobuf-compiler libprotobuf-dev openssl libssl-dev libcurl4-openssl-dev
$ wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
$ tar xvf cmake-3.13.5.tar.gz
$ cd cmake-3.13.5/
$ ./bootstrap --system-curl
$ make -j$(nproc)
$ echo 'export PATH='${PWD}'/bin/:$PATH' >> ~/.bashrc
$ source ~/.bashrc
$ git clone https://github.com/onnx/onnx-tensorrt.git
$ cd onnx-tensorrt/
$ git submodule update --init --recursive
$ mkdir build && cd build
$ cmake ../
$ make -j
$ sudo cp libnvonnxparser.so.7.2.2 /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.7.1.3
$ sudo ldconfig

However, there is another error related to the Resize layer, which we are checking right now.

Thanks.

Thank you very much! Until the problem with TensorFlow is fixed, I will use PyTorch, where ONNX export seems to be better integrated.
Looking forward to more progress on the Resize operation.