Hello,
I’m using a custom plug-in for non-maximum suppression when converting object detection models to TensorRT via ONNX. To support it, I added a new node to TensorRT’s ONNX parser that creates the plug-in layer, and I convert the model with the onnx2trt tool compiled against my adjusted version of the parser. Since updating from TensorRT 6 to 7, the following error appears when I try to deserialize the converted engine:
[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin NMS_TRT_ONNX version 1
[TensorRT] ERROR: safeDeserializationUtils.cpp (323) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
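For reference, the deserialization step itself looks roughly like this (a minimal sketch; the file names are placeholders, and the explicit ctypes load stands in for LD_PRELOAD):

import ctypes
import tensorrt as trt

# Load the plug-in library; REGISTER_TENSORRT_PLUGIN inside it should
# register the creator as soon as the library is loaded.
ctypes.CDLL('libnms_trt_onnx.so')  # placeholder name for my plug-in library

logger = trt.Logger(trt.Logger.WARNING)
with open('model.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())  # fails with the error above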
However, I load the .so file containing my plug-in via LD_PRELOAD, and it uses REGISTER_TENSORRT_PLUGIN to register the plug-in. If I execute the following code prior to deserializing the engine, the correct plug-in creator is found:
import tensorrt as trt

# Look up the creator for my plug-in in the global plugin registry.
creator = trt.get_plugin_registry().get_plugin_creator('NMS_TRT_ONNX', '1')
print(creator.name)
print(creator.plugin_namespace)
print(creator.plugin_version)
Output (the plug-in namespace is empty, hence the blank second line):
NMS_TRT_ONNX

1
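For completeness, the whole registry can be enumerated like this (a short sketch using the plugin_creator_list property of the Python bindings):

import tensorrt as trt

# Print every creator the registry currently knows about.
registry = trt.get_plugin_registry()
for c in registry.plugin_creator_list:
    print(c.name, c.plugin_version, repr(c.plugin_namespace))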
Why doesn’t TensorRT find the plug-in even though it is in the registry? Do TensorRT’s Python bindings use a different registry than the deserialize_cuda_engine function?
The issue appears on a GTX 1080 Ti with TensorRT 7.1.3 and on a Jetson Nano with JetPack 4.4.
Thanks in advance for any help.
Environment
TensorRT Version: 7.1.3
GPU Type: Maxwell (Jetson Nano)
Nvidia Driver Version: JetPack 4.4
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: JetPack 4.4
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Baremetal