Adding the EfficientNMS plugin to a TensorRT engine at runtime

Description

Integrating the NMS plugin into an ONNX -> TensorRT engine

Environment

TensorRT Version: 8.0.1
GPU Type: Jetson NX
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): NA
TensorFlow Version (if applicable): NA
PyTorch Version (if applicable): 1.8.1
Baremetal or Container (if container which image + tag): NA

Relevant Files

NA

Steps To Reproduce

I could not find a tutorial on adding a plugin to a TensorRT engine while serializing and deserializing.
Steps:

  1. The ONNX model is exported with opset 11.
  2. The ONNX model outputs detections [B, NUM_BOX, 4] and scores [B, NUM_BOX, 2].
  3. The TensorRT engine is serialized from the ONNX model.
  4. I have the EfficientNMS plugin from the TensorRT plugin library.
  5. I need to attach that plugin to the TensorRT engine at runtime.

How do I do that? Any suggestions or tutorials on building a plugin and registering it with a TensorRT model while serializing or deserializing would help a lot.

Thanks

Hi,
Please refer to the links below for custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins, or refactor existing ones, to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
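On the deserialization side of the question: an engine that contains a plugin layer can only be deserialized if the plugin creator is registered in the current process first. For TensorRT's built-in plugins (EfficientNMS_TRT included), this is a one-line call in the Python bindings. A hedged sketch, assuming a serialized engine file whose path is a placeholder:

```python
def load_engine(engine_path: str):
    """Deserialize a TensorRT engine, registering built-in plugins first.

    Sketch only: requires a TensorRT install, so the import is done
    lazily inside the function.
    """
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    # Register all built-in plugin creators ("" = default namespace)
    # with the global registry; this must happen in this process
    # before deserialize_cuda_engine() is called.
    trt.init_libnvinfer_plugins(logger, "")
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())
```

If you implement your own plugin instead, the same idea applies: load the shared library that registers your creator (e.g. via ctypes.CDLL) before deserializing.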

Thanks!

@anil2 I recently managed to add this plugin to my ONNX model; check this out: retinanet-tensorflow2.x/onnx_utils.py at master · srihari-humbarwadi/retinanet-tensorflow2.x · GitHub

Hi,

The following may also be helpful to you: https://github.com/pskiran1/TensorRT-support-for-Tensorflow-2-Object-Detection-Models/blob/main/create_onnx.py

Thank you.

We now have TensorRT/samples/python/tensorflow_object_detection_api at main · NVIDIA/TensorRT · GitHub available; the above link will no longer work.