ONNX Plugin Layer implementation

Description

Hi,
I downloaded ssd.onnx and ran it through the ONNX parser to generate a .trt file, but I hit this error:

While parsing node number 464 [NonMaxSuppression]:
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: NonMaxSuppression
&&&& FAILED TensorRT.sample_onnx_mnist # ./test_onnx

It seems that a NonMaxSuppression plugin is not available, so I tried to implement a plugin for the custom layer.
Following the nmsPlugin example, I created a NonMaxSuppressionPlugin class:

  1. Make a directory NonMaxSuppressionPlugin in the plugin folder
  2. Copy nmsPlugin.cpp and nmsPlugin.h into the NonMaxSuppressionPlugin folder
  3. Rename nmsPlugin and nmsPluginCreator to NonMaxSuppressionPlugin and NonMaxSuppressionPluginCreator
  4. Add the new directory to the CMake list
  5. Add DEFINE_BUILTIN_OP_IMPORTER(NonMaxSuppression) in parsers/onnx/builtin_op_importers.cpp (see the importer sketch after this list)
  6. Add initializePlugin<nvinfer1::plugin::NonMaxSuppressionPluginCreator>(logger, libNamespace) in InferPlugin.cpp
  7. Regenerate the build files in the build folder with “cmake .. -DTRT_LIB_DIR=~/TensorRT/lib -DTRT_BIN_DIR=`pwd`/out -DBUILD_PLUGINS=ON -DBUILD_PARSERS=ON”
  8. Run make in the build folder
  9. Run make install
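
The importer I added in step 5 looks roughly like this (a sketch from memory, modeled on the plugin-backed importers already in builtin_op_importers.cpp; the name/version/namespace strings and the empty field collection are approximations, not my exact code):

// Sketch of a plugin-backed builtin op importer. The name, version, and
// namespace passed to getPluginCreator() must match what the creator
// reports and the namespace it was registered under.
DEFINE_BUILTIN_OP_IMPORTER(NonMaxSuppression)
{
    std::vector<nvinfer1::ITensor*> tensors;
    for (auto& input : inputs)
    {
        tensors.push_back(&convertToTensor(input, ctx));
    }

    nvinfer1::IPluginCreator* creator
        = getPluginRegistry()->getPluginCreator("NonMaxSuppression", "001", "ONNXTRT_NAMESPACE");
    ASSERT(creator != nullptr, ErrorCode::kUNSUPPORTED_NODE);

    // A real importer would translate the node's attributes and initializers
    // into PluginFields; this sketch passes an empty collection.
    nvinfer1::PluginFieldCollection fc{0, nullptr};
    nvinfer1::IPluginV2* plugin = creator->createPlugin(node.name().c_str(), &fc);

    nvinfer1::IPluginV2Layer* layer
        = ctx->network()->addPluginV2(tensors.data(), static_cast<int>(tensors.size()), *plugin);
    RETURN_ALL_OUTPUTS(layer);
}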

After rebuilding, I used onnx2trt ssd-10.onnx -o ssd_trt.out to convert the model.
It gives this message:

Input filename:   ../../../samples/data/ssd-10.onnx
ONNX IR version:  0.0.4
Opset version:    10
Producer name:    pytorch
Producer version: 1.1
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
Parsing model
[2020-12-16 06:36:24 WARNING] /home/u5393118/TensorRT/parsers/onnx/onnx2trt_utils.cpp:235: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
.....
[2020-12-16 06:36:24 WARNING] /home/u5393118/TensorRT/parsers/onnx/onnx2trt_utils.cpp:261: One or more weights outside the range of INT32 was clamped
[2020-12-16 06:36:24   ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin NonMaxSuppressionONNXTRT_NAMESPACE version 001

I’ not sure if I miss something in register the NonMaxSuppressionPlugin in to ONNX parser.
Is any implementation example in ONNX plugin?
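
For reference, my understanding from the nmsPlugin example is that the getPluginCreator lookup in the log above only succeeds when the name, version, and namespace the importer requests exactly match what the creator reports and the namespace it was registered under. The creator side of my attempt looks roughly like this (a sketch from memory; the string values are assumptions):

// The plugin registry matches these strings against the importer's
// getPluginCreator() arguments, character for character.
const char* NonMaxSuppressionPluginCreator::getPluginName() const
{
    return "NonMaxSuppression"; // assumed; must equal the lookup name
}

const char* NonMaxSuppressionPluginCreator::getPluginVersion() const
{
    return "001"; // assumed; must equal the lookup version
}

The error prints the requested name and namespace run together (NonMaxSuppressionONNXTRT_NAMESPACE), so possibly one of these strings is off on my side.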

Environment

TensorRT Version : 7.0.0-1
GPU Type : Tesla V100
Nvidia Driver Version : 450.51.05
CUDA Version : 11.0
CUDNN Version :
Operating System + Version : Ubuntu 18.04
Python Version (if applicable) : 3.6.9
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :

Relevant Files

The ONNX model was downloaded from https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/ssd

Steps To Reproduce

  1. Make a directory NonMaxSuppressionPlugin in the plugin folder
  2. Copy nmsPlugin.cpp and nmsPlugin.h into the NonMaxSuppressionPlugin folder
  3. Rename nmsPlugin and nmsPluginCreator to NonMaxSuppressionPlugin and NonMaxSuppressionPluginCreator
  4. Add the new directory to the CMake list
  5. Add DEFINE_BUILTIN_OP_IMPORTER(NonMaxSuppression) in parsers/onnx/builtin_op_importers.cpp
  6. Add initializePlugin<nvinfer1::plugin::NonMaxSuppressionPluginCreator>(logger, libNamespace) in InferPlugin.cpp (see the registration sketch after this list)
  7. Regenerate the build files in the build folder with “cmake .. -DTRT_LIB_DIR=~/TensorRT/lib -DTRT_BIN_DIR=`pwd`/out -DBUILD_PLUGINS=ON -DBUILD_PARSERS=ON”
  8. Run make in the build folder
  9. Run make install
  10. In build/parsers/onnx/, run onnx2trt ssd-10.onnx -o ssd.trt
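
For step 6, the only change in plugin/InferPlugin.cpp is one added registration line inside initLibNvInferPlugins(), roughly like this (again a sketch from memory, assuming the creator class from steps 1-3 compiles into libnvinfer_plugin):

// initLibNvInferPlugins() registers every bundled plugin creator with the
// global registry under libNamespace; the custom creator is added alongside.
extern "C" {
bool initLibNvInferPlugins(void* logger, const char* libNamespace)
{
    initializePlugin<nvinfer1::plugin::NMSPluginCreator>(logger, libNamespace);
    // ... the other built-in creators ...

    // Added line for the custom op:
    initializePlugin<nvinfer1::plugin::NonMaxSuppressionPluginCreator>(logger, libNamespace);
    return true;
}
} // extern "C"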


Hi @disculus2012,
Please refer to the samples below:


Hi, request you to check the reference links below for custom plugin implementation.


Thanks!

@disculus2012

Did you ever get this solved?


I solved this problem by replacing the plugin and parser .so files in /usr/lib/x86_64-linux-gnu/ with the ones from ${your_path}/TensorRT/lib:

cp ${your_path}/TensorRT/lib/target.so /usr/lib/x86_64-linux-gnu/

@disculus2012, is that the exact command you used to solve this?

“cp ${your_path}/TensorRT/lib/target.so /usr/lib/x86_64-linux-gnu/”

The problem was that I hadn’t installed the new shared libraries I built, so the old shared libraries didn’t contain the new plugin importer for the ONNX parser.
I’m using TensorRT 7.0.0, so the plugin libraries are versioned 7.0.0.
I updated the two .so files with:

cp ~/TensorRT/lib/libnvinfer_plugin.so.7.0.0 /usr/lib/x86_64-linux-gnu/

cp ~/TensorRT/lib/libnvonnxparser.so.7.0.0 /usr/lib/x86_64-linux-gnu/

The filename ~/TensorRT/lib/libnvonnxparser.so.7.0.0 may be different on your system; you can check in your TensorRT folder.
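
If it helps, a quick way to confirm the copied libraries are the ones actually being loaded is to look the creator up in the plugin registry directly (a rough sketch; use whatever name/version strings your creator reports, and the namespace must match how it was registered):

#include <iostream>
#include <NvInfer.h>
#include <NvInferPlugin.h>

// Checks whether the installed libnvinfer_plugin contains the new creator:
// register the bundled plugins, then look the creator up by name/version.
int main()
{
    initLibNvInferPlugins(nullptr, ""); // nullptr logger for brevity

    nvinfer1::IPluginCreator* creator
        = getPluginRegistry()->getPluginCreator("NonMaxSuppression", "001");
    std::cout << (creator ? "plugin creator found" : "plugin creator NOT found")
              << std::endl;
    return creator ? 0 : 1;
}

Build it with something like g++ check.cpp -lnvinfer -lnvinfer_plugin -o check; if the lookup fails against the freshly copied libraries, the creator never made it into the build.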

Okay, I thought just copying the libraries would solve the lack of an NMS plugin. Does your plugin work for multiple models?

Sorry,
I’m still working on the processing part of the plugin.


lol @klinten

Someone also replied to me recently asking how to solve this; maybe it helps.

Thanks for sharing.


You’re welcome. Please drop by this thread again and let me know if you’ve made progress on getting a plugin working.