TensorFlow Object Detection API model (.pb) to TensorRT

Hello, I am trying to convert my trained model (EfficientDet D0) from the TensorFlow Object Detection API to TensorRT. A mentor from NVIDIA provided me with a useful resource, the TensorRT GitHub EfficientDet sample. I followed the README.md step by step; however, at the Create ONNX Graph step (from the instructions), I get an error when executing the following command line:

python create_onnx.py --input_shape '1,512,512,3' --saved_model ./saved_model --onnx ./model.onnx

The error looks as follows:


INFO:tf2onnx.tf_utils:Computed 2 values for constant folding

INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/Select_1

INFO:tf2onnx.tfonnx:folding node using tf type=Select, name=StatefulPartitionedCall/Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/Select_8

INFO:tf2onnx.optimizer:Optimizing ONNX model

INFO:tf2onnx.optimizer:After optimization: BatchNormalization -45 (108->63), Cast -88 (197->109), Const -756 (1411->655), GlobalAveragePool +16 (0->16), Identity -109 (109->0), Less -1 (10->9), Mul -2 (170->168), Placeholder -2 (6->4), ReduceMean -16 (16->0), ReduceSum -4 (5->1), Reshape -89 (147->58), Shape -5 (29->24), Slice -9 (58->49), Split -1 (15->14), Squeeze +5 (95->100), Sub -4 (36->32), Transpose -681 (767->86), Unsqueeze -51 (188->137)

INFO:EfficientDetGraphSurgeon:TF2ONNX graph created successfully

[W] colored module is not installed, will not use colors when logging. To enable colors, please install the colored module: python3 -m pip install colored

[W] 'Shape tensor cast elision' routine failed with: None

[W] colored module is not installed, will not use colors when logging. To enable colors, please install the colored module: python3 -m pip install colored

[W] 'Shape tensor cast elision' routine failed with: None

[W] colored module is not installed, will not use colors when logging. To enable colors, please install the colored module: python3 -m pip install colored

[W] 'Shape tensor cast elision' routine failed with: None

[W] colored module is not installed, will not use colors when logging. To enable colors, please install the colored module: python3 -m pip install colored

[W] 'Shape tensor cast elision' routine failed with: None

INFO:EfficientDetGraphSurgeon:Graph was detected as TFOD

INFO:EfficientDetGraphSurgeon:ONNX graph input shape: [1, 512, 512, 3] [NHWC format detected]

Traceback (most recent call last):

  File "create_onnx.py", line 451, in <module>

    main(args)

  File "create_onnx.py", line 425, in main

    effdet_gs.update_preprocessor(args.input_shape)

  File "create_onnx.py", line 155, in update_preprocessor

    mean_val = -1 * np.expand_dims(np.asarray([0.485, 0.456, 0.406], dtype=np.float32), axis=(0, 2, 3))

  File "<__array_function__ internals>", line 6, in expand_dims

  File "/usr/local/lib/python3.6/dist-packages/numpy/lib/shape_base.py", line 577, in expand_dims

    if axis > a.ndim or axis < -a.ndim - 1:

TypeError: '>' not supported between instances of 'tuple' and 'int'

I tried to modify line 155 of create_onnx.py, but had no success. Then I tried the pretrained EfficientDet D0 model, and the same error occurs. Finally, I tried changing the shape order on the command line from ‘1,512,512,3’ to ‘1,3,512,512’, again with no success.
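For reference, the TypeError in the traceback is what np.expand_dims raises on NumPy versions older than 1.18, where passing a tuple of axes is not yet supported. A minimal sketch of an equivalent computation for line 155 that works on any NumPy version (the mean values are the ones shown in the traceback):

```python
import numpy as np

# np.expand_dims only accepts a tuple of axes since NumPy 1.18; on older
# NumPy the tuple is compared against an int internally, which raises
# "TypeError: '>' not supported between instances of 'tuple' and 'int'".
mean = np.asarray([0.485, 0.456, 0.406], dtype=np.float32)

# Equivalent to np.expand_dims(mean, axis=(0, 2, 3)), on any NumPy version:
mean_val = -1 * mean.reshape(1, 3, 1, 1)

print(np.__version__)
print(mean_val.shape)  # (1, 3, 1, 1)
```

If your environment allows it, the simpler fix is to upgrade NumPy (python3 -m pip install 'numpy>=1.18') instead of editing the script.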

I would appreciate any help with this problem. Thank you in advance for all the support.

Hi,

We suppose you are using JetPack 4.6.
Have you checked out the release/8.0 branch, since JetPack 4.6 uses TensorRT 8.0?
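In case it helps, a sketch of pinning the TensorRT OSS repository to that branch (the repository URL is the public NVIDIA/TensorRT GitHub; the sample subdirectory path is an assumption, so adjust it to your checkout):

```shell
# Clone the TensorRT OSS samples and switch to the branch that matches
# the TensorRT version shipped with JetPack 4.6 (TensorRT 8.0):
git clone https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git checkout release/8.0
cd samples/python/efficientdet   # EfficientDet conversion scripts (assumed path)
```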

Thanks.

Hi,

According to the TensorRT GitHub EfficientDet README: The workflow to convert an EfficientDet model is basically TensorFlow → ONNX → TensorRT, and so parts of this process require TensorFlow to be installed. If you are performing this conversion to run inference on the edge, such as for NVIDIA Jetson devices, it might be easier to do the ONNX conversion on a PC first.

In my case, I converted my trained_model.pb to onnx_model.onnx on my PC, not on the Jetson TX2. For the second step, the conversion from .onnx to a TensorRT engine, I will run it on the Jetson TX2.
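For that second step on the TX2, a minimal sketch using the trtexec binary bundled with JetPack (the install path and the FP16 flag are assumptions; adjust file names to your model):

```shell
# Build a TensorRT engine from the ONNX model on the Jetson.
# trtexec ships with JetPack, typically under /usr/src/tensorrt/bin.
/usr/src/tensorrt/bin/trtexec \
    --onnx=model.onnx \
    --saveEngine=model.engine \
    --fp16   # optional: FP16 usually gives a good speedup on the TX2
```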

Should I perform all the conversions (.pb → ONNX → TensorRT) on the Jetson TX2 instead?

Thank you for all the help

Hi,

You can do the first step on a desktop machine.
But since the TX2 uses TensorRT 8.0, please check out the same branch for compatibility.

Thanks.