I'm trying to convert my ONNX model to a TensorRT engine, but I'm getting an error for one unsupported layer.
[06/06/2020-14:42:34] [W] [TRT] onnx2trt_utils.cpp:217: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/06/2020-14:42:34] [W] [TRT] onnx2trt_utils.cpp:243: One or more weights outside the range of INT32 was clamped
[06/06/2020-14:42:34] [W] [TRT] ModelImporter.cpp:135: No importer registered for op: ReverseSequence. Attempting to import as plugin.
[06/06/2020-14:42:34] [I] [TRT] builtin_op_importers.cpp:3556: Searching for plugin: ReverseSequence, plugin_version: 001, plugin_namespace:
[06/06/2020-14:42:34] [E] [TRT] INVALID_ARGUMENT: getPluginCreator could not find plugin ReverseSequence version 001
ERROR: builtin_op_importers.cpp:3558 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found"
[06/06/2020-14:42:34] [E] Failed to parse onnx file
[06/06/2020-14:42:34] [E] Parsing model failed
[06/06/2020-14:42:34] [E] Engine creation failed
[06/06/2020-14:42:34] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/Downloads/model.onnx --shapes=input_1:32x3x244x244
I also converted this ONNX model to a quantized ONNX model using a quantization tool and got this result:
[06/08/2020-15:04:42] [W] [TRT] onnx2trt_utils.cpp:217: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[06/08/2020-15:04:42] [W] [TRT] onnx2trt_utils.cpp:243: One or more weights outside the range of INT32 was clamped
[06/08/2020-15:04:42] [E] [TRT] onnx2trt_utils.cpp:391: Found unsupported datatype (2) when importing initializer: conv2d/Conv2D/ReadVariableOp:0_quantized
ERROR: ModelImporter.cpp:92 In function parseGraph:
[8] Assertion failed: convertOnnxWeights(initializer, &weights, ctx)
[06/08/2020-15:04:42] [E] Failed to parse onnx file
[06/08/2020-15:04:42] [E] Parsing model failed
[06/08/2020-15:04:42] [E] Engine creation failed
[06/08/2020-15:04:42] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/Downloads/quantized_model.onnx --shapes=input_1:32x3x244x244
Also, can you help me write a custom plugin for this? Some tutorials I went through suggested replacing the operation with another one, but I don't know which operation can serve as an alternative to ReverseSequence. Can you guide me on this?
Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import onnx

# Load the model and run the structural/consistency checker
model = onnx.load("model.onnx")  # replace with the path to your ONNX model
onnx.checker.check_model(model)
2) Try running your model with the trtexec command.
In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
Thanks!
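On the question of an alternative to ReverseSequence: when every entry of sequence_lens equals the full time dimension, ReverseSequence reduces to a plain flip along the time axis, which can be expressed with an ONNX Slice using step -1 (an op the TensorRT ONNX parser does support). Below is a minimal NumPy sketch of that equivalence; the reference implementation, shapes, and axis choices are illustrative assumptions, not taken from your model:

```python
import numpy as np

def reverse_sequence(x, seq_lens, time_axis=0, batch_axis=1):
    """Reference NumPy implementation of ONNX ReverseSequence:
    for each batch element b, reverse the first seq_lens[b] steps
    along time_axis and leave the remaining steps untouched."""
    out = x.copy()
    for b, n in enumerate(seq_lens):
        idx = [slice(None)] * x.ndim
        idx[batch_axis] = b
        rev = list(idx)
        idx[time_axis] = slice(0, n)            # destination: first n steps
        rev[time_axis] = slice(n - 1, None, -1)  # source: those steps reversed
        out[tuple(idx)] = x[tuple(rev)]
    return out

# Illustrative shapes: T time steps, B batch, C channels
T, B, C = 5, 2, 3
x = np.arange(T * B * C, dtype=np.float32).reshape(T, B, C)

# When every sequence uses the full length T, the op is just a flip
# along the time axis, i.e. a Slice with step -1.
assert np.array_equal(reverse_sequence(x, [T] * B), x[::-1])
```

So if your model always processes full-length sequences, replacing the ReverseSequence node with a Slice (start=T-1, end=INT_MIN, step=-1 on the time axis) in the ONNX graph may avoid the need for a custom plugin entirely. If sequence lengths genuinely vary per batch element, a custom TensorRT plugin would still be required.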