Issue converting models from the TF2 model zoo to ONNX -> TRT with TensorRT OSS

Hello,
I am trying to convert the following models from the TF2 model zoo to ONNX using the create_onnx.py script provided in the TensorRT OSS repository (https://github.com/NVIDIA/TensorRT):

Faster R-CNN ResNet101 V1 640x640
Faster R-CNN ResNet101 V1 1024x1024
SSD ResNet50 V1 FPN 640x640 (RetinaNet50)

but I am facing the same error for all three models, as shown below:

INFO:tf2onnx.tf_utils:Computed 4 values for constant folding
INFO:tf2onnx.tfonnx:folding node using tf type=ExpandDims, name=StatefulPartitionedCall/Postprocessor/ExpandDims_2
INFO:tf2onnx.tfonnx:folding node using tf type=ConcatV2, name=StatefulPartitionedCall/MultiscaleGridAnchorGenerator/GridAnchorGenerator/concat
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/Select_1
INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/Select_8
INFO:tf2onnx.tfonnx:folding node type=Range, name=StatefulPartitionedCall/Postprocessor/range
INFO:tf2onnx.optimizer:Optimizing ONNX model
INFO:tf2onnx.optimizer:After optimization: BatchNormalization -145 (148->3), Cast -656 (1254->598), Const -3386 (3931->545), Identity -66 (66->0), Mul -2 (220->218), ReduceSum -90 (91->1), Reshape -89 (292->203), Shape -91 (195->104), Slice -1 (293->292), Split -2 (13->11), Squeeze -2 (214->212), Sub -90 (201->111), Transpose -607 (631->24), Unsqueeze -101 (228->127)
INFO:ModelHelper:TF2ONNX graph created successfully
[W] colored module is not installed, will not use colors when logging. To enable colors, please install the colored module: python3 -m pip install colored
[W] 'Shape tensor cast elision' routine failed with: None
INFO:ModelHelper:Model is ssd_resnet101_v1_fpn_keras
INFO:ModelHelper:Height is 640
INFO:ModelHelper:Width is 640
INFO:ModelHelper:First NMS score threshold is 9.99999993922529e-09
INFO:ModelHelper:First NMS iou threshold is 0.6000000238418579
INFO:ModelHelper:First NMS max proposals is 100
INFO:ModelHelper:ONNX graph input shape: [1, 640, 640, 3] [NCHW format set]
INFO:ModelHelper:Found Conv node 'StatefulPartitionedCall/ResNet101V1_FPN/functional_1/conv1_conv/Conv2D' as stem entry
Traceback (most recent call last):
  File "create_onnx.py", line 671, in <module>
    main(args)
  File "create_onnx.py", line 646, in main
    effdet_gs.update_preprocessor(args.batch_size, args.input_format)
  File "create_onnx.py", line 258, in update_preprocessor
    tile_node.outputs =
AttributeError: 'NoneType' object has no attribute 'outputs'

Environment

TensorRT: 8.2.1.8
NVIDIA GPU: GTX 1060 6GB
NVIDIA Driver Version: 470.57.02
CUDA Version: 11.4
CUDNN Version: 8.2.2.26
Operating System: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 2.5.1
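
For reference, this is roughly how I invoke the conversion (the paths are placeholders and the argument names are as I recall them from the sample, so they may differ slightly):

python3 create_onnx.py \
    --pipeline_config /path/to/pipeline.config \
    --saved_model /path/to/exported_model/saved_model \
    --onnx /path/to/model.onnx \
    --batch_size 1 \
    --input_format NCHW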

Please help!
Regards

Hi,
Could you please share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import sys
import onnx

# Load the ONNX model given on the command line and run the checker on it
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
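
For example, assuming your model is saved as model.onnx:

python3 check_model.py model.onnx
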
  2. Try running your model with the trtexec command.

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
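
For example (the ONNX and engine file names are placeholders):

trtexec --onnx=model.onnx --saveEngine=model.engine --verbose

This both checks that the model can be parsed by the ONNX parser and attempts to build a TensorRT engine, and the --verbose output shows where parsing or building fails.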
Thanks!

Hi,

We recommend that you post your concern on the TensorRT GitHub issues page (https://github.com/NVIDIA/TensorRT/issues) to get better help.

Thank you.