TensorRT Version: 6.0.0.1
GPU Type: Nvidia A40
Nvidia Driver Version: 510.47.03
CUDA Version: 10.1
CUDNN Version: 8.2.2
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.8
TensorFlow Version (if applicable): 2.5.1
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorflow:19.10-py3
TensorRT Version: 8.0.0.1
GPU Type: Nvidia A40
Nvidia Driver Version: 510.47.03
CUDA Version: 10.1
CUDNN Version: 8.2.2
Operating System + Version: Ubuntu 18.04
Hi,
Please refer to the links below for custom plugin implementation and a sample:
While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins, or refactor existing ones, to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
[04/06/2022-04:05:14] [E] [TRT] ModelImporter.cpp:751: --- End node ---
[04/06/2022-04:05:14] [E] [TRT] ModelImporter.cpp:754: ERROR: builtin_op_importers.cpp:4951 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[04/06/2022-04:05:14] [E] Failed to parse onnx file
[04/06/2022-04:05:14] [I] Finish parsing network model
[04/06/2022-04:05:14] [E] Parsing model failed
[04/06/2022-04:05:14] [E] Failed to create engine from model.
[04/06/2022-04:05:14] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8400] # /usr/src/tensorrt/bin/trtexec --onnx=albert_best_model_cluener_efficientglobalpointer_tf32.onnx --verbose
It looks like you're using the unsupported operator Mod. We recommend implementing a custom plugin for it. Please refer to the links below for custom plugin implementation and a sample:
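As a rough reference for what a Mod plugin kernel would have to compute (not TensorRT code, just a sketch of the operator's semantics): ONNX Mod follows Python's `%` behavior when its `fmod` attribute is 0 (the result takes the sign of the divisor) and C's `fmod` behavior when `fmod` is 1 (the result takes the sign of the dividend).

```python
import math

def onnx_mod(a: float, b: float, fmod: int = 0) -> float:
    """Reference semantics of the ONNX Mod operator.

    fmod=0: integer-style modulus; result has the sign of the divisor b
            (Python's % behavior).
    fmod=1: C-style fmod; result has the sign of the dividend a.
    """
    if fmod:
        return math.fmod(a, b)            # sign follows the dividend a
    return a - b * math.floor(a / b)      # sign follows the divisor b

print(onnx_mod(-7, 3))           # 2.0  (sign of divisor)
print(onnx_mod(-7, 3, fmod=1))   # -1.0 (sign of dividend)
```

Whichever variant your model's Mod nodes use, the plugin's `enqueue` implementation has to match it element-wise; getting the sign convention wrong is a common source of silent accuracy mismatches after conversion.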