Layer BatchedNmsPlugin failed validation

Description

I’m trying to add a BatchedNmsPlugin after a YOLOv4-tiny model. My workflow is: convert the darknet model to ONNX, then use onnx-graphsurgeon to add the BatchedNmsPlugin node, and finally use trtexec to convert the ONNX model to a TensorRT engine. However, I get the following error.
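For context, this is roughly what the plugin computes per image and per class — a pure-Python sketch of greedy NMS; the box format and thresholds here are illustrative, and the real `BatchedNMS_TRT` plugin runs this batched on the GPU:

```python
# Pure-Python sketch of the greedy NMS that BatchedNMS_TRT performs
# per image and per class. Boxes are [x1, y1, x2, y2]; the threshold
# values are illustrative, not the plugin's defaults.

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, score_thresh=0.25, iou_thresh=0.45, keep_top_k=100):
    # Keep the highest-scoring boxes, dropping any box whose overlap
    # with an already-kept box exceeds iou_thresh.
    order = sorted(
        (i for i, s in enumerate(scores) if s >= score_thresh),
        key=lambda i: scores[i], reverse=True,
    )
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
        if len(kept) == keep_top_k:
            break
    return kept
```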

Environment

TensorRT Version : 7.1.3
GPU Type : Jetson Xavier NX
Nvidia Driver Version :
CUDA Version : 10.2
CUDNN Version : 8
Operating System + Version : Ubuntu 18.04

Relevant Files

Steps To Reproduce

sudo ./trtexec --onnx=modified.onnx
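If the parser cannot find the plugin, explicitly loading the plugin library via trtexec’s `--plugins` flag may help; the `.so` path below is the usual JetPack location on Jetson and is an assumption — verify it on your device:

```shell
# Load the TensorRT plugin library explicitly so BatchedNMS_TRT is
# registered before the ONNX parser runs. The path is the usual
# JetPack location on Jetson -- adjust if yours differs.
sudo ./trtexec --onnx=modified.onnx \
    --plugins=/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
```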

Hi @jack_gao,
Kindly allow access to your files.

Thanks!

OK! I’ve allowed access.

Thanks!

Hi @jack_gao,
Looks like the issue is with your custom plugin implementation or registration.
The links below might help.



Thanks!

Hi @AakankshaS,

I was following the advice in https://github.com/NVIDIA/TensorRT/issues/795#issuecomment-697445231.

Using onnx-graphsurgeon as described in

https://developer.nvidia.com/blog/estimating-depth-beyond-2d-using-custom-layers-on-tensorrt-and-onnx-models/ .
It looks simple and easy, but now I get this error and I don’t know which step went wrong. The plugin should be registered from libnvinfer_plugin.so automatically, shouldn’t it?

Thanks!