Converting from ONNX to TensorRT fails when saving the engine

Description

We converted an object detection model from TensorFlow to ONNX, and are now trying to convert it to TensorRT. The conversion fails with the following error:

[TensorRT] WARNING: onnx2trt_utils.cpp:217: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: onnx2trt_utils.cpp:243: One or more weights outside the range of INT32 was clamped
[TensorRT] ERROR: Equal__2701:0: shape tensor must have type Int32.
[TensorRT] ERROR: Equal__2846:0: shape tensor must have type Int32.
[TensorRT] ERROR: Equal__1023:0: shape tensor must have type Int32.
[TensorRT] ERROR: Equal__1032:0: shape tensor must have type Int32.
[TensorRT] ERROR: Equal__1039:0: shape tensor must have type Int32.
[TensorRT] ERROR: Equal__1071:0: shape tensor must have type Int32.
[TensorRT] ERROR: Builder failed while analyzing shapes.
Engine <tensorrt.tensorrt.INetworkDefinition object at 0x7f7ecd3e68>
Traceback (most recent call last):
  File "getplan.py", line 20, in <module>
    eng.save_engine(engine, engine_name) 
  File "/srv/demo/Demos/scripts/tensorrt_uff/ai-scripts/engine.py", line 27, in save_engine
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
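The AttributeError at the end is a symptom rather than the root cause: when the build fails ("Builder failed while analyzing shapes" above), the builder returns None instead of an engine, and calling serialize() on None then crashes. A minimal sketch of a safer save_engine (the real engine.py is not shown, so the names here are assumed):

```python
# Hypothetical sketch: guard against a failed build before serializing.
# The TensorRT builder returns None when the network fails to build,
# so check for that instead of letting serialize() raise AttributeError.

def save_engine(engine, path):
    if engine is None:
        # Surface the real problem (the failed build) instead of
        # "'NoneType' object has no attribute 'serialize'".
        raise RuntimeError("Engine build failed; nothing to serialize")
    with open(path, "wb") as f:
        f.write(engine.serialize())
```

This does not fix the build failure itself, but it makes the script report it clearly instead of crashing on the None result.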

We also tried converting with ‘onnx2trt’ and got the error below:

----------------------------------------------------------------
Input filename:   model1.onnx
ONNX IR version:  0.0.7
Opset version:    12
Producer name:    tf2onnx
Producer version: 1.6.2
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.7) than this parser was built against (0.0.6).
Parsing model
[2020-07-08 13:13:07 WARNING] /srv/demo/Demos/scripts/tensorrt_uff/ai-scripts/onnx2trt/onnx-tensorrt/onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[2020-07-08 13:13:07 WARNING] /srv/demo/Demos/scripts/tensorrt_uff/ai-scripts/onnx2trt/onnx-tensorrt/onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
While parsing node number 852 [Unsqueeze -> "MultipleGridAnchorGenerator/stack_1_Unsqueeze__1596:0"]:
ERROR: /srv/demo/Demos/scripts/tensorrt_uff/ai-scripts/onnx2trt/onnx-tensorrt/onnx2trt_utils.cpp:188 In function convertAxis:
[8] Assertion failed: axis >= 0 && axis < nbDims
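The two INT64 warnings above are often harmless, but when a clamped value later feeds a shape computation (as the earlier "shape tensor must have type Int32" errors suggest), the resulting shapes can be silently wrong. Roughly what the parser does when it "casts down" (a sketch, not the actual onnx2trt code):

```python
import numpy as np

# Sketch of the INT64 -> INT32 cast the parser warns about: values
# outside the int32 range are clamped, which silently changes them.
INT32_MIN = np.iinfo(np.int32).min  # -2147483648
INT32_MAX = np.iinfo(np.int32).max  # 2147483647

def cast_down_to_int32(weights):
    # Clamp out-of-range values, then narrow the dtype.
    return np.clip(weights, INT32_MIN, INT32_MAX).astype(np.int32)
```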

We also tried conversion using trtexec, and are getting the following error:

[07/08/2020-09:42:10] [V] [TRT] ImporterContext.hpp:122: Registering layer: MultipleGridAnchorGenerator/Meshgrid_2/Tile for ONNX node: MultipleGridAnchorGenerator/Meshgrid_2/Tile
[07/08/2020-09:42:10] [V] [TRT] ImporterContext.hpp:97: Registering tensor: MultipleGridAnchorGenerator/Meshgrid_2/Tile:0 for ONNX tensor: MultipleGridAnchorGenerator/Meshgrid_2/Tile:0
[07/08/2020-09:42:10] [V] [TRT] ModelImporter.cpp:182: MultipleGridAnchorGenerator/Meshgrid_2/Tile [Tile] outputs: [MultipleGridAnchorGenerator/Meshgrid_2/Tile:0 -> ()], 
[07/08/2020-09:42:10] [V] [TRT] ModelImporter.cpp:103: Parsing node: MultipleGridAnchorGenerator/stack_1_Unsqueeze__1596 [Unsqueeze]
[07/08/2020-09:42:10] [V] [TRT] ModelImporter.cpp:119: Searching for input: MultipleGridAnchorGenerator/Meshgrid_2/Tile:0
[07/08/2020-09:42:10] [V] [TRT] ModelImporter.cpp:125: MultipleGridAnchorGenerator/stack_1_Unsqueeze__1596 [Unsqueeze] inputs: [MultipleGridAnchorGenerator/Meshgrid_2/Tile:0 -> ()], 
ERROR: onnx2trt_utils.cpp:185 In function convertAxis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[07/08/2020-09:42:10] [E] Failed to parse onnx file
[07/08/2020-09:42:10] [E] Parsing model failed
[07/08/2020-09:42:10] [E] Engine creation failed
[07/08/2020-09:42:10] [E] Engine set up failed
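The assertion `axis >= 0 && axis < nbDims` fails because the input to the Unsqueeze node, `MultipleGridAnchorGenerator/Meshgrid_2/Tile:0`, is registered with shape `()` in the verbose log above, i.e. rank 0, so no axis can satisfy the check. A sketch of the check (mirroring the intent of convertAxis in onnx2trt_utils.cpp; this is not the actual code):

```python
def convert_axis(axis, nb_dims):
    # ONNX allows negative axes that count from the end; normalize first.
    if axis < 0:
        axis += nb_dims
    # This is the assertion that fails: with nb_dims == 0 (a scalar or
    # unknown-rank tensor, shape "()"), no axis value can pass.
    if not (0 <= axis < nb_dims):
        raise AssertionError("axis >= 0 && axis < nbDims")
    return axis
```

Running ONNX shape inference on the model before parsing (e.g. `onnx.shape_inference.infer_shapes`) may give the parser the rank information it needs for the Tile output, though whether that resolves this particular graph is not guaranteed.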

Environment

TensorRT Version: 7.1.0 (from JetPack 4.4; dpkg shows):

ii graphsurgeon-tf 7.1.0-1+cuda10.2 arm64 GraphSurgeon for TensorRT package

GPU Type: Jetson Nano (JetPack 4.4)
Nvidia Driver Version: bundled with JetPack 4.4
CUDA Version: 10.2
CUDNN Version:
Operating System + Version:

Linux svetlana-desktop 4.9.140-tegra #1 SMP PREEMPT Wed Apr 8 18:10:49 PDT 2020 aarch64 aarch64 aarch64 GNU/Linux

Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15

Any help would be appreciated.

Thank you

Svetlana

Hi @svetlana.podvalniuk,
Please share your ONNX model and script so that we can assist you better.
Thanks!

Hi,

I can’t upload them here; what is the preferred way of sharing the model with you?

Thank you

Svetlana

You can share over IM.
Thanks!