TensorRT produces incorrect output for networks with custom plugins

Description

I have an ONNX model with three operators in total, of which operator 1 and operator 2 are my custom plugins. When the model is parsed and run in TensorRT, the values of output1 and output3 are correct, but output2 is incorrect. However, when I copy operator 2's output buffer out of device memory with cudaMemcpy inside its enqueue(), the printed values are correct.
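For reference, this is roughly the kind of debug copy described above; a minimal sketch assuming a float output, with the helper name dumpDeviceFloats being hypothetical:

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Debug helper: copy the first n floats of a device buffer back to the host
// and print them, e.g. dumpDeviceFloats(outputs[0], 8, stream) at the end of
// enqueue(). The stream is synchronized first so the plugin kernel has
// finished before the copy.
static void dumpDeviceFloats(void const* devPtr, int n, cudaStream_t stream)
{
    std::vector<float> host(n);
    cudaStreamSynchronize(stream);
    cudaMemcpy(host.data(), devPtr, n * sizeof(float), cudaMemcpyDeviceToHost);
    for (float v : host) std::printf("%f ", v);
    std::printf("\n");
}
```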
I also ran onnx.checker.check_model(model); it reports an error saying my custom plugin ops are not recognized, but printing the model with print(model) shows the network structure is correct.
How can I debug this problem? Thanks.

Environment

TensorRT Version: 8.2 GA
GPU Type: RTX 3060
Nvidia Driver Version: 31.0.15.1702
CUDA Version: 11.4
CUDNN Version: 11.4
Operating System + Version: Windows 11
PyTorch Version (if applicable): 1.9.0+cu111


Hi,
Please refer to the TensorRT documentation and samples on custom plugin implementation. In particular:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
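For context, this is roughly the shape of a plugin class targeting IPluginV2DynamicExt on TensorRT 8.x; a declaration-only sketch with MyPlugin as a placeholder name, not a complete implementation:

```cpp
#include <NvInfer.h>

// Declaration-only skeleton of a plugin targeting IPluginV2DynamicExt
// (TensorRT 8.x). "MyPlugin" is a placeholder; method bodies and the
// matching IPluginCreator are omitted.
class MyPlugin : public nvinfer1::IPluginV2DynamicExt
{
public:
    // IPluginV2DynamicExt
    nvinfer1::IPluginV2DynamicExt* clone() const noexcept override;
    nvinfer1::DimsExprs getOutputDimensions(int32_t outputIndex,
        nvinfer1::DimsExprs const* inputs, int32_t nbInputs,
        nvinfer1::IExprBuilder& exprBuilder) noexcept override;
    bool supportsFormatCombination(int32_t pos,
        nvinfer1::PluginTensorDesc const* inOut,
        int32_t nbInputs, int32_t nbOutputs) noexcept override;
    void configurePlugin(nvinfer1::DynamicPluginTensorDesc const* in,
        int32_t nbInputs, nvinfer1::DynamicPluginTensorDesc const* out,
        int32_t nbOutputs) noexcept override;
    size_t getWorkspaceSize(nvinfer1::PluginTensorDesc const* inputs,
        int32_t nbInputs, nvinfer1::PluginTensorDesc const* outputs,
        int32_t nbOutputs) const noexcept override;
    int32_t enqueue(nvinfer1::PluginTensorDesc const* inputDesc,
        nvinfer1::PluginTensorDesc const* outputDesc,
        void const* const* inputs, void* const* outputs,
        void* workspace, cudaStream_t stream) noexcept override;

    // IPluginV2Ext
    nvinfer1::DataType getOutputDataType(int32_t index,
        nvinfer1::DataType const* inputTypes,
        int32_t nbInputs) const noexcept override;

    // IPluginV2 (identity, lifetime, serialization)
    char const* getPluginType() const noexcept override;
    char const* getPluginVersion() const noexcept override;
    int32_t getNbOutputs() const noexcept override;
    int32_t initialize() noexcept override;
    void terminate() noexcept override;
    size_t getSerializationSize() const noexcept override;
    void serialize(void* buffer) const noexcept override;
    void destroy() noexcept override;
    void setPluginNamespace(char const* pluginNamespace) noexcept override;
    char const* getPluginNamespace() const noexcept override;
};
```

Note that the corresponding IPluginCreator also needs to be registered (for example via the REGISTER_TENSORRT_PLUGIN macro) so the ONNX parser can resolve the custom ops by name and version.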

Thanks!