Efficient NMS plugin to TensorRT engine at runtime


NMS Plugin Integration to ONNX->TensorRT engine


TensorRT Version: 8.0.1
GPU Type: Jetson NX
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): NA
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): 1.8.1
Baremetal or Container (if container which image + tag): NA

Relevant Files


Steps To Reproduce

I could not find any tutorial on adding a plugin to a TensorRT engine during serialization and deserialization.

  1. The ONNX model is exported with opset 11.
  2. The ONNX model outputs detections [B, NUM_BOX, 4] and scores [B, NUM_BOX, 2].
  3. The TensorRT engine is serialized from the ONNX model.
  4. I have the EfficientNMS plugin from the TensorRT plugin library.
  5. I need to add that plugin to the TensorRT engine at runtime.

How do I do that? Any suggestions or tutorials on building a plugin and registering it with a TensorRT model while serializing or deserializing would help a lot.


Please refer to the links below for custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.


@anil2 I recently managed to add this plugin to my ONNX model; check out retinanet-tensorflow2.x/onnx_utils.py at master · srihari-humbarwadi/retinanet-tensorflow2.x · GitHub


Also, the following may be helpful to you: TensorRT-support-for-Tensorflow-2-Object-Detection-Models/create_onnx.py at main · pskiran1/TensorRT-support-for-Tensorflow-2-Object-Detection-Models · GitHub

Thank you.

TensorRT/samples/python/tensorflow_object_detection_api at main · NVIDIA/TensorRT · GitHub is available now; the above link will no longer work.