Problem converting a TensorFlow 2 -> ONNX model to a TensorRT engine (efficientdet_d0)

Description

  1. I train the efficientdet_d0_coco17_tpu-32 model in TensorFlow 2 (Windows 10). It works.
  2. Then I convert the model to ONNX:
python -m tf2onnx.convert --saved-model C:\rr_vagon\output\saved_model --output model.onnx --opset 12
  3. Then I update the model:
python modify_onnx.py model.onnx u_model.onnx
import sys

import numpy as np
import onnx
import onnx_graphsurgeon as gs

# modify_onnx.py: load the model, validate it, force the declared dtype
# of every graph input to float32, and re-export.
model = onnx.load(sys.argv[1])
onnx.checker.check_model(model)
graph = gs.import_onnx(model)

for inp in graph.inputs:
    inp.dtype = np.float32

onnx.save(gs.export_onnx(graph), sys.argv[2])
  4. But when I run:
trtexec.exe --onnx=u_model.onnx --batch=1 --saveEngine=efficientdet_d0.trt --explicitBatch

then I get:

Input filename:   u_model.onnx
ONNX IR version:  0.0.7
Opset version:    12
Producer name:
Producer version:
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
[07/21/2021-15:20:54] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: builtin_op_importers.cpp:1601 In function importIf:
[8] Assertion failed: cond.is_weights() && cond.weights().count() == 1 && "If condition must be a initializer!"
[07/21/2021-15:20:54] [E] Failed to parse onnx file
[07/21/2021-15:20:54] [E] Parsing model failed
[07/21/2021-15:20:54] [E] Engine creation failed
[07/21/2021-15:20:54] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec.exe --onnx=u_model.onnx --batch=1 --saveEngine=efficientdet_d0.trt --explicitBatch

If I use --verbose:

[07/21/2021-15:22:51] [V] [TRT] ImporterContext.hpp:120: Registering tensor: map/while/Preprocessor/ResizeToRange/strided_slice__75:0 for ONNX tensor: map/while/Preprocessor/ResizeToRange/strided_slice__75:0
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:179: map/while/Preprocessor/ResizeToRange/strided_slice__75 [Squeeze] outputs: [map/while/Preprocessor/ResizeToRange/strided_slice__75:0 -> ()],
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:103: Parsing node: map/while/Preprocessor/ResizeToRange/Less__76 [Cast]
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:119: Searching for input: map/while/Preprocessor/ResizeToRange/strided_slice__75:0
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:125: map/while/Preprocessor/ResizeToRange/Less__76 [Cast] inputs: [map/while/Preprocessor/ResizeToRange/strided_slice__75:0 -> ()],
[07/21/2021-15:22:51] [V] [TRT] builtin_op_importers.cpp:320: Casting to type: float32
[07/21/2021-15:22:51] [V] [TRT] ImporterContext.hpp:154: Registering layer: map/while/Preprocessor/ResizeToRange/Less__76 for ONNX node: map/while/Preprocessor/ResizeToRange/Less__76
[07/21/2021-15:22:51] [V] [TRT] ImporterContext.hpp:120: Registering tensor: map/while/Preprocessor/ResizeToRange/Less__76:0 for ONNX tensor: map/while/Preprocessor/ResizeToRange/Less__76:0
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:179: map/while/Preprocessor/ResizeToRange/Less__76 [Cast] outputs: [map/while/Preprocessor/ResizeToRange/Less__76:0 -> ()],
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:103: Parsing node: map/while/Preprocessor/ResizeToRange/Less [Less]
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:119: Searching for input: map/while/Preprocessor/ResizeToRange/Less__76:0
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:119: Searching for input: map/while/Preprocessor/ResizeToRange/Less__77:0
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:125: map/while/Preprocessor/ResizeToRange/Less [Less] inputs: [map/while/Preprocessor/ResizeToRange/Less__76:0 -> ()], [map/while/Preprocessor/ResizeToRange/Less__77:0 -> ()],
[07/21/2021-15:22:51] [V] [TRT] ImporterContext.hpp:154: Registering layer: map/while/Preprocessor/ResizeToRange/Less for ONNX node: map/while/Preprocessor/ResizeToRange/Less
[07/21/2021-15:22:51] [V] [TRT] ImporterContext.hpp:120: Registering tensor: map/while/Preprocessor/ResizeToRange/Less:0 for ONNX tensor: map/while/Preprocessor/ResizeToRange/Less:0
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:179: map/while/Preprocessor/ResizeToRange/Less [Less] outputs: [map/while/Preprocessor/ResizeToRange/Less:0 -> ()],
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:103: Parsing node: map/while/Preprocessor/ResizeToRange/cond [If]
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:119: Searching for input: map/while/Preprocessor/ResizeToRange/Less:0
[07/21/2021-15:22:51] [V] [TRT] ModelImporter.cpp:125: map/while/Preprocessor/ResizeToRange/cond [If] inputs: [map/while/Preprocessor/ResizeToRange/Less:0 -> ()],
ERROR: builtin_op_importers.cpp:1601 In function importIf:
[8] Assertion failed: cond.is_weights() && cond.weights().count() == 1 && "If condition must be a initializer!"
[07/21/2021-15:22:51] [E] Failed to parse onnx file
[07/21/2021-15:22:51] [E] Parsing model failed
[07/21/2021-15:22:51] [E] Engine creation failed
[07/21/2021-15:22:51] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec.exe --verbose --onnx=u_model.onnx --batch=1 --saveEngine=efficientdet_d0.trt --explicitBatch

What is the problem?

Environment

TensorRT Version: TensorRT-7.2.3.4 (also tried TensorRT-7.2.1.6)
GPU Type: 2080 Super
Nvidia Driver Version: 461.33
CUDA Version: 11.1 and 11.2 installed
CUDNN Version: 8.1.1.33 for TensorRT-7.2.3.4 (8.0.5.39 for TensorRT-7.2.1.6)
Operating System + Version: Windows 10
Python Version (if applicable): 3.9.2
TensorFlow Version (if applicable): 2.5.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi,
We request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import sys

import onnx

# Usage: python check_model.py u_model.onnx
filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, we request you to share the trtexec --verbose log for further debugging.
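You can also check the If nodes directly: the TensorRT 7 parser requires each If condition to be an initializer (a constant baked into the graph), so a quick way to see what the parser trips on is to print every If node and where its condition comes from. A minimal sketch (find_if_nodes.py is a hypothetical helper, not NVIDIA tooling, and it only inspects the top-level graph, not subgraphs):

find_if_nodes.py

import sys

import onnx

# Print every top-level If node and whether its condition input is a
# graph initializer (a constant) or a tensor computed at runtime.
# (A condition produced by a Constant node would also count as weights
# for TensorRT; checking initializers covers the common case.)
model = onnx.load(sys.argv[1])
initializer_names = {init.name for init in model.graph.initializer}
for node in model.graph.node:
    if node.op_type == "If":
        cond = node.input[0]
        status = "initializer" if cond in initializer_names else "computed at runtime"
        print(f"{node.name}: cond={cond} ({status})")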
Thanks!

  1. done
  2. done
    Log:
    log (112.3 KB)

Hi @intbusoft,

We recommend that you try the latest TensorRT version.

Actually, an example for EfficientDet has been released in our OSS (the TensorRT open source repository).

If you still face this issue, please share the ONNX model and the trtexec verbose logs with us.
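If upgrading is not possible right away, a common workaround for the importIf assertion is to constant-fold the graph so that, where possible, each If condition becomes an initializer. Below is a minimal sketch using onnx-graphsurgeon's fold_constants() (the script name is hypothetical; folding only helps when the condition is statically computable, so data-dependent If nodes may still need the graph rewritten):

fold_constants.py

import sys

import onnx
import onnx_graphsurgeon as gs

# Constant-fold the graph, drop dead nodes, and re-export the model.
# (fold_constants may require onnxruntime to be installed in order to
# evaluate the foldable subgraphs.)
graph = gs.import_onnx(onnx.load(sys.argv[1]))
graph.fold_constants()        # evaluate statically computable subgraphs
graph.cleanup().toposort()    # remove now-dead nodes, restore a valid order
onnx.save(gs.export_onnx(graph), sys.argv[2])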

Thank you.

Thank you.
With TensorRT 8, I get the next error:

[07/22/2021-17:23:04] [E] [TRT] ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:4643 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"

The link you proposed will not work for me, because the procedure needs to be done on Windows.

Verbose log:
log.txt (167.8 KB)

ONNX model:
u_model.onnx (16.3 MB)

TensorFlow pipeline:
pipeline.config (4.3 KB)

@intbusoft,

Based on the verbose logs, it looks like you're using an unsupported op, Round. Please implement a custom plugin for it.
Check the TRT supported ops here: https://github.com/onnx/onnx-tensorrt/blob/main/operators.md

Please find below a sample on implementing a custom plugin:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleOnnxMnistCoordConvAC
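Before writing a plugin, it can help to see exactly which op types the model uses and compare them against the supported-operators list above. A minimal sketch (list_ops.py is a hypothetical helper; it only walks the top-level graph, not subgraphs):

list_ops.py

import sys
from collections import Counter

import onnx

# Count the op types used in the top-level ONNX graph.
# Usage: python list_ops.py u_model.onnx
model = onnx.load(sys.argv[1])
op_counts = Counter(node.op_type for node in model.graph.node)
for op_type, count in sorted(op_counts.items()):
    print(f"{op_type}: {count}")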

Thanks!

This is strange:

TensorRT 8.0 supports operators up to Opset 13.

But I used Opset 12.

OK, I understood what the problem is. Thanks!

I have the same problem too. How did you solve it?