• Hardware Platform (Jetson / GPU): Any
• DeepStream Version: 5.1
• TensorRT Version: 7.*
I would like to use an SSD MobileNet V2 FPNLite 640x640 model originally trained in TensorFlow 2.
1) TensorFlow 2 → ONNX
I’ve converted the TF2 saved_model to ONNX using the tools from this repository.
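For reference, the NMS-related nodes in the exported graph can be listed with the onnx Python package (a minimal sketch; the model path is a placeholder):

```python
import onnx

# Load the exported model and list any NMS-related nodes;
# "model.onnx" is a placeholder for the actual export path.
model = onnx.load("model.onnx")
for node in model.graph.node:
    if "NMS" in node.op_type:
        print(node.op_type, node.name)
```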
The output layers, however, are adapted to use the EfficientNMS_TRT op, which TensorRT 7 does not support:
dez 07 18:12:02: ERROR: [TRT]: INVALID_ARGUMENT: getPluginCreator could not find plugin EfficientNMS_TRT version 1
dez 07 18:12:02: WARNING: [TRT]: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
dez 07 18:12:02: INFO: [TRT]: ModelImporter.cpp:135: No importer registered for op: EfficientNMS_TRT. Attempting to import as plugin.
dez 07 18:12:02: INFO: [TRT]: builtin_op_importers.cpp:3659: Searching for plugin: EfficientNMS_TRT, plugin_version: 1, plugin_namespace:
dez 07 18:12:02: ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
dez 07 18:12:02: [8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
dez 07 18:12:02: ERROR: Failed to parse onnx file
dez 07 18:12:02: ERROR: failed to build network since parsing model errors.
dez 07 18:12:02: ERROR: failed to build network.
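Listing the plugin creators registered by this TensorRT build confirms the plugin simply isn’t there in 7.* (a quick check via the TensorRT Python API):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
# Register the standard TensorRT plugins (NMS_TRT, BatchedNMS_TRT, ...).
trt.init_libnvinfer_plugins(logger, "")

# Print every registered plugin creator; on TensorRT 7.*,
# EfficientNMS_TRT does not appear in this list.
for creator in trt.get_plugin_registry().plugin_creator_list:
    print(creator.name, creator.plugin_version)
```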
Does anyone know of a workaround that keeps both the ONNX → TensorRT conversion step and inference on DeepStream?
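One direction I’ve been considering is rewriting the graph to target BatchedNMS_TRT, which TensorRT 7 does ship with. A rough, untested sketch with onnx-graphsurgeon follows; the attribute values are placeholders for my model, and since the two plugins also differ in input/output tensor layouts, a plain op rename alone is probably not enough:

```python
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))

for node in graph.nodes:
    if node.op == "EfficientNMS_TRT":
        # Retarget the node at the TensorRT 7 plugin. NOTE: the two plugins
        # use different attribute names AND different input/output layouts,
        # so the surrounding graph would also need adjusting.
        node.op = "BatchedNMS_TRT"
        node.attrs = {
            "shareLocation": 1,
            "backgroundLabelId": -1,
            "numClasses": 90,       # placeholder: the model's class count
            "topK": 1024,
            "keepTopK": 100,
            "scoreThreshold": 0.3,  # placeholder thresholds
            "iouThreshold": 0.5,
            "isNormalized": 1,
            "clipBoxes": 1,
        }

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "model_batchednms.onnx")
```

Is something along these lines viable, or is there a cleaner route?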
2) TensorFlow 2 → UFF
I also tried the UFF conversion method described here.
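For context, the conversion step looked roughly like the frozen-graph flow below (a sketch, assuming the frozen .pb plus config.py approach used by DeepStream's objectDetector_SSD sample; file names are placeholders):

```python
import uff

# Sketch of the frozen-graph UFF export; "config.py" is the
# graphsurgeon preprocessor that maps the TF NMS subgraph onto
# the NMS_TRT plugin, as in the objectDetector_SSD sample.
uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",   # placeholder: frozen TF graph
    output_nodes=["NMS"],
    preprocessor="config.py",
    output_filename="model.uff",
)
```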
Despite being able to produce a UFF file, I think the conversion was not fully successful, as I’m not able to use it in DeepStream: it raises an assertion error at TensorRT/plugin/nmsPlugin/nmsPlugin.cpp:82 when creating the engine.
Does UFF conversion only work for models trained with TensorFlow 1?