Layer BatchedNmsPlugin failed validation

Description

I’m trying to add a BatchedNmsPlugin after a YoloV4-tiny model. My workflow is: convert the darknet model to ONNX, then use onnx-graphsurgeon to add the BatchedNmsPlugin node, and finally use trtexec to convert the ONNX model to a TensorRT engine, but trtexec fails with the validation error in the title.

Environment

TensorRT Version : 7.1.3
GPU Type : Jetson Xavier NX
Nvidia Driver Version :
CUDA Version : 10.2
CUDNN Version : 8
Operating System + Version : Ubuntu 18.04

Relevant Files

https://drive.google.com/file/d/17ce5rRwxaB3NxNBQLFQHC76hFAkO9Rxh/view?usp=sharing

Steps To Reproduce

sudo ./trtexec --onnx=modified.onnx

Hi @jack_gao,
Kindly allow access to your files.

Thanks!

OK! I’ve allowed access.

Thanks!

Hi @jack_gao,
Looks like the issue is with your custom plugin implementation or registration.
The link below might help:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/samplePlugin

Thanks!

Hi @AakankshaS,

I was following the advice in How to use NMS with Pytorch model (that was converted to ONNX -> TensorRT) · Issue #795 · NVIDIA/TensorRT · GitHub.

I used onnx-graphsurgeon as shown in Estimating Depth with ONNX Models and Custom Layers Using NVIDIA TensorRT | NVIDIA Technical Blog.
It looked simple and easy, but now I get this error and I don’t know which step went wrong. I thought the plugin would be registered from libnvinfer_plugin.so automatically, wouldn’t it?

Thanks!


Hi @jack_gao ,
Apologies for the delayed response. Are you still facing the issue?