[8] No importer registered for op: If

TensorRT 7.0 reports an error when parsing an ONNX file:
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: If

Hi,

Can you provide the following information so we can help you better?
Provide details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version

Also, if possible please share the script & model file to reproduce the issue.

Meanwhile, please try the latest ONNX opset version. You can also use the trtexec command to test and debug your model:

https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

Thanks

&&&& RUNNING TensorRT.smoke # C:\code\sy2020\TensorRtSmoke\x64\Debug\TensorRtCoil.exe
[02/04/2020-11:38:52] [I] Building and running a GPU inference engine for Onnx MNIST

Input filename: C:/code/tenflow/yolov3_smoke/data/darknet_weights/yolov3_tf.onnx
ONNX IR version: 0.0.6
Opset version: 10
Producer name: tf2onnx
Producer version: 1.5.5
Domain:
Model version: 0
Doc string:

[02/04/2020-11:38:57] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/04/2020-11:38:57] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/04/2020-11:38:57] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 4 [If]:
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: If
&&&& FAILED TensorRT.smoke # C:\code\sy2020\TensorRtSmoke\x64\Debug\TensorRtCoil.exe
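(Side note: the INT64 warnings in the log above are usually harmless. tf2onnx exports shape and index constants as int64, and TensorRT's cast down to int32 only loses information if a value exceeds the int32 range. A quick numpy sketch of why the cast is safe for typical shape constants:)

```python
import numpy as np

# Typical int64 constants in a tf2onnx export: shapes, axes, indices.
w64 = np.array([1, 416, 416, 3], dtype=np.int64)

# TensorRT casts these down to int32; the cast is lossless as long as
# every value fits in the int32 range.
assert np.all(np.abs(w64) <= np.iinfo(np.int32).max)
w32 = w64.astype(np.int32)
print(w32.dtype, bool((w64 == w32).all()))  # int32 True
```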

2020-03-04 11:42:39,109 - INFO - Using tensorflow=1.14.0, onnx=1.6.0, tf2onnx=1.5.5/3c8f90
2020-03-04 11:42:39,109 - INFO - Using opset <onnx, 10>
2020-03-04 11:42:46,537 - INFO - Optimizing ONNX model
2020-03-04 11:43:50,925 - INFO - After optimization: Const -301 (679->378), Identity -579 (651->72), Transpose -227 (447->220)
2020-03-04 11:43:51,345 - INFO -
2020-03-04 11:43:51,345 - INFO - Successfully converted TensorFlow model yolov3smoke.pb to ONNX
2020-03-04 11:43:51,727 - INFO - ONNX model is saved at yolov3_tf.onnx

The ONNX file was tested and verified with onnxruntime, and it runs correctly.

Hi,

Can you share the model file so we can help better?

Thanks

C:\tools\TensorRT-7.0.0.11\bin>trtexec.exe --onnx=C:\code\tenflow\yolov3_smoke\data\darknet_weights\yolov3_tf.onnx
&&&& RUNNING TensorRT.trtexec # trtexec.exe --onnx=C:\code\tenflow\yolov3_smoke\data\darknet_weights\yolov3_tf.onnx
[03/04/2020-15:22:26] [I] === Model Options ===
[03/04/2020-15:22:26] [I] Format: ONNX
[03/04/2020-15:22:26] [I] Model: C:\code\tenflow\yolov3_smoke\data\darknet_weights\yolov3_tf.onnx
[03/04/2020-15:22:26] [I] Output:
[03/04/2020-15:22:26] [I] === Build Options ===
[03/04/2020-15:22:26] [I] Max batch: 1
[03/04/2020-15:22:26] [I] Workspace: 16 MB
[03/04/2020-15:22:26] [I] minTiming: 1
[03/04/2020-15:22:26] [I] avgTiming: 8
[03/04/2020-15:22:26] [I] Precision: FP32
[03/04/2020-15:22:26] [I] Calibration:
[03/04/2020-15:22:26] [I] Safe mode: Disabled
[03/04/2020-15:22:26] [I] Save engine:
[03/04/2020-15:22:26] [I] Load engine:
[03/04/2020-15:22:26] [I] Inputs format: fp32:CHW
[03/04/2020-15:22:26] [I] Outputs format: fp32:CHW
[03/04/2020-15:22:26] [I] Input build shapes: model
[03/04/2020-15:22:26] [I] === System Options ===
[03/04/2020-15:22:26] [I] Device: 0
[03/04/2020-15:22:26] [I] DLACore:
[03/04/2020-15:22:26] [I] Plugins:
[03/04/2020-15:22:26] [I] === Inference Options ===
[03/04/2020-15:22:26] [I] Batch: 1
[03/04/2020-15:22:26] [I] Iterations: 10
[03/04/2020-15:22:26] [I] Duration: 3s (+ 200ms warm up)
[03/04/2020-15:22:26] [I] Sleep time: 0ms
[03/04/2020-15:22:26] [I] Streams: 1
[03/04/2020-15:22:26] [I] ExposeDMA: Disabled
[03/04/2020-15:22:26] [I] Spin-wait: Disabled
[03/04/2020-15:22:26] [I] Multithreading: Disabled
[03/04/2020-15:22:26] [I] CUDA Graph: Disabled
[03/04/2020-15:22:26] [I] Skip inference: Disabled
[03/04/2020-15:22:26] [I] Input inference shapes: model
[03/04/2020-15:22:26] [I] Inputs:
[03/04/2020-15:22:26] [I] === Reporting Options ===
[03/04/2020-15:22:26] [I] Verbose: Disabled
[03/04/2020-15:22:26] [I] Averages: 10 inferences
[03/04/2020-15:22:26] [I] Percentile: 99
[03/04/2020-15:22:26] [I] Dump output: Disabled
[03/04/2020-15:22:26] [I] Profile: Disabled
[03/04/2020-15:22:26] [I] Export timing to JSON file:
[03/04/2020-15:22:26] [I] Export output to JSON file:
[03/04/2020-15:22:26] [I] Export profile to JSON file:
[03/04/2020-15:22:26] [I]

Input filename: C:\code\tenflow\yolov3_smoke\data\darknet_weights\yolov3_tf.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: tf2onnx
Producer version: 1.5.5
Domain:
Model version: 0
Doc string:

[03/04/2020-15:22:29] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/04/2020-15:22:29] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/04/2020-15:22:29] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 4 [If]:
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: If
[03/04/2020-15:22:29] [E] Failed to parse onnx file
[03/04/2020-15:22:29] [E] Parsing model failed
[03/04/2020-15:22:29] [E] Engine creation failed
[03/04/2020-15:22:29] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec.exe --onnx=C:\code\tenflow\yolov3_smoke\data\darknet_weights\yolov3_tf.onnx

The model file can be downloaded from this link:

Link: Baidu Netdisk (extraction code required)
Extraction code: 4478

Hi,

The “If” operator is currently not supported in TRT 7. Please refer to the link below for the list of supported operators:
https://github.com/onnx/onnx-tensorrt/blob/master/operators.md

You will need to implement a custom plugin for the unsupported layer.

Thanks

OK. I hope NVIDIA will publish support for these common operators.