ONNX model to TRT conversion error

Description

I’m trying to convert my ONNX model to a TensorRT engine, but I’m getting an error for one unsupported layer.

[06/06/2020-14:42:34] [W] [TRT] onnx2trt_utils.cpp:217: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/06/2020-14:42:34] [W] [TRT] onnx2trt_utils.cpp:243: One or more weights outside the range of INT32 was clamped
[06/06/2020-14:42:34] [W] [TRT] ModelImporter.cpp:135: No importer registered for op: ReverseSequence. Attempting to import as plugin.
[06/06/2020-14:42:34] [I] [TRT] builtin_op_importers.cpp:3556: Searching for plugin: ReverseSequence, plugin_version: 001, plugin_namespace:
[06/06/2020-14:42:34] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin ReverseSequence version 001
ERROR: builtin_op_importers.cpp:3558 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found"
[06/06/2020-14:42:34] [E] Failed to parse onnx file
[06/06/2020-14:42:34] [E] Parsing model failed
[06/06/2020-14:42:34] [E] Engine creation failed
[06/06/2020-14:42:34] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/Downloads/model.onnx --shapes=input_1:32x3x244x244

Relevant Files

Here is the ONNX model file:
https://drive.google.com/file/d/1T4cHLysFSX6pXQLYKeXpPaAu0__n_c0X/view?usp=sharing

Can you tell me what’s wrong with this model? What can I do to convert it into a TensorRT engine?

The ReverseSequence op is currently not supported in TRT. Please refer to the link below for more details.
https://github.com/onnx/onnx-tensorrt/blob/master/operators.md

You need to create a custom plugin for any unsupported layer.
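
To see which op types the model actually contains (and cross-check them against the support matrix linked above), a minimal sketch like this can help; the model path is a placeholder:

list_ops.py

import onnx

model = onnx.load("model.onnx")  # placeholder path
# Collect every distinct op type used in the graph
print(sorted({node.op_type for node in model.graph.node}))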

Thanks

I converted this ONNX model to a quantized ONNX model using a quantization tool.
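
A minimal sketch of that kind of quantization step, assuming ONNX Runtime's dynamic quantizer (the exact tool and API here are assumptions; paths are placeholders):

quantize.py

from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    "model.onnx",            # placeholder input path
    "quantized_model.onnx",  # placeholder output path
    weight_type=QuantType.QUInt8,  # UINT8 weights; UINT8 is ONNX datatype 2, as in the log below
)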
Got this result:

[06/08/2020-15:04:42] [W] [TRT] onnx2trt_utils.cpp:217: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/08/2020-15:04:42] [W] [TRT] onnx2trt_utils.cpp:243: One or more weights outside the range of INT32 was clamped
[06/08/2020-15:04:42] [E] [TRT] onnx2trt_utils.cpp:391: Found unsupported datatype (2) when importing initializer: conv2d/Conv2D/ReadVariableOp:0_quantized
ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[06/08/2020-15:04:42] [E] Failed to parse onnx file
[06/08/2020-15:04:42] [E] Parsing model failed
[06/08/2020-15:04:42] [E] Engine creation failed
[06/08/2020-15:04:42] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/Downloads/quantized_model.onnx --shapes=input_1:32x3x244x244

Any inputs on this? Is this the right approach?

Also, can you help me with writing a custom plugin for this? I went through some tutorials on this; they said to replace the operation with some other op, but I don’t know which operation can serve as an alternative to reverse_sequence. Can you guide me on this?

Please refer to the links below:
https://github.com/NVIDIA/TensorRT/issues/6#issuecomment-570290621
https://github.com/NVIDIA/TensorRT/issues/6#issuecomment-603683069
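
Once a custom plugin library has been built and registered (the issues above cover the C++ side), a rough sketch of loading it in Python before parsing the ONNX model looks like this; the library name is hypothetical:

load_plugin.py

import ctypes
import tensorrt as trt

# Load the compiled custom-plugin library (hypothetical name)
ctypes.CDLL("libreversesequence_plugin.so")

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# Register built-in and externally loaded plugins with the registry
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("model.onnx", "rb") as f:  # placeholder path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))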

Thanks

I’m also getting a similar issue.
TRT 7.0.0
TF 1.15
Used tf2onnx for conversion to ONNX format (roughly the command sketched below)
ssd_mobilenet_v2_coco_2018_03_115.onnx (66.1 MB)
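
For reference, a typical tf2onnx invocation for a TF1 frozen graph looks roughly like this; the graph file and tensor names below are the usual TF Object Detection API SSD defaults, so verify them against your model:

python -m tf2onnx.convert --graphdef frozen_inference_graph.pb \
    --inputs image_tensor:0 \
    --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0 \
    --output ssd_mobilenet_v2.onnx --opset 11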

Hi,
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.
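
For example (the path and input shape are placeholders; adjust them to your model):

./trtexec --onnx=/path/to/model.onnx --shapes=input_1:32x3x244x244 --verbose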

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
Thanks!