TensorRT onnx parser + BatchedNMSDynamicPlugin

Hello, we are using the YOLOv5 model, which we store in ONNX format. In our project we parse it with the TensorRT ONNX parser; post-processing is currently done with OpenCV on the CPU.
We want to move the post-processing into BatchedNMSDynamicPlugin so that it connects directly to the output of the parsed network.
How can we do this?

Environment

TensorRT Version: 7.2.3.4
GPU Type: RTX 3060
Nvidia Driver Version:
CUDA Version: 11.1
CUDNN Version:
Operating System + Version: Ubuntu 20.04 (Linux)
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,

We hope the following section of the TensorRT developer guide, "Extending TensorRT with Custom Layers", will be helpful to you:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#extending
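The general idea is: parse the ONNX model into an INetworkDefinition, reshape the YOLOv5 head output into the `boxes`/`scores` tensors the plugin expects, then add the plugin with `add_plugin_v2` and mark its outputs. The sketch below shows this with the TensorRT Python API; the model path, the tensor wiring (shown with placeholders), and the plugin-field values (class count, thresholds, top-K) are assumptions you must adapt to your own export:

```python
import numpy as np
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")  # registers BatchedNMSDynamic_TRT

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("yolov5s.onnx", "rb") as f:  # assumed model path
    parser.parse(f.read())

# BatchedNMSDynamic_TRT expects two inputs:
#   boxes:  [batch, num_boxes, 1, 4]
#   scores: [batch, num_boxes, num_classes]
# YOLOv5 emits a raw [batch, num_boxes, 5 + num_classes] tensor, so you
# typically insert Shuffle/Slice/ElementWise layers to split and reshape it.
# The two lines below are placeholders for those reshaped tensors.
boxes = network.get_output(0)   # placeholder: your reshaped boxes tensor
scores = network.get_output(0)  # placeholder: your reshaped scores tensor

creator = trt.get_plugin_registry().get_plugin_creator(
    "BatchedNMSDynamic_TRT", "1")
fields = trt.PluginFieldCollection([
    trt.PluginField("shareLocation", np.array([1], np.int32),
                    trt.PluginFieldType.INT32),
    trt.PluginField("backgroundLabelId", np.array([-1], np.int32),
                    trt.PluginFieldType.INT32),
    trt.PluginField("numClasses", np.array([80], np.int32),    # assumed COCO
                    trt.PluginFieldType.INT32),
    trt.PluginField("topK", np.array([1000], np.int32),
                    trt.PluginFieldType.INT32),
    trt.PluginField("keepTopK", np.array([100], np.int32),
                    trt.PluginFieldType.INT32),
    trt.PluginField("scoreThreshold", np.array([0.25], np.float32),
                    trt.PluginFieldType.FLOAT32),
    trt.PluginField("iouThreshold", np.array([0.45], np.float32),
                    trt.PluginFieldType.FLOAT32),
    trt.PluginField("isNormalized", np.array([1], np.int32),
                    trt.PluginFieldType.INT32),
    trt.PluginField("clipBoxes", np.array([1], np.int32),
                    trt.PluginFieldType.INT32),
])
nms = creator.create_plugin("batched_nms", fields)
nms_layer = network.add_plugin_v2([boxes, scores], nms)

# Replace the raw head output with the plugin's four outputs:
# num_detections, nmsed_boxes, nmsed_scores, nmsed_classes.
network.unmark_output(network.get_output(0))
for i in range(nms_layer.num_outputs):
    network.mark_output(nms_layer.get_output(i))
```

An alternative to building the wiring in code is to append a `BatchedNMSDynamic_TRT` node to the ONNX graph itself (e.g. with ONNX GraphSurgeon) and let the parser map it to the plugin. Note that `scoreThreshold`/`iouThreshold` replace the thresholding your OpenCV post-processing did on the CPU.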

Thank you.