Description
I have an ONNX model with a total of three operators, of which operator 1 and operator 2 are my custom plugins. When I parse and run the model in TensorRT, the values of output1 and output3 are both correct, but output2 is incorrect. However, when I copy operator 2's output from device memory with cudaMemcpy inside enqueue(), the printed values are correct.
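For reference, the device-side check in enqueue() is essentially the following (a minimal sketch, not my actual plugin code; the helper name dumpDeviceOutput, the float element type, and the element count are placeholders for illustration):

```cpp
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Debug helper: copy `count` floats from a device pointer back to the host
// and print them, to verify what the plugin actually wrote to device memory.
static void dumpDeviceOutput(const void* devPtr, size_t count, cudaStream_t stream)
{
    std::vector<float> host(count);
    cudaStreamSynchronize(stream);  // make sure the plugin kernel has finished
    cudaMemcpy(host.data(), devPtr, count * sizeof(float), cudaMemcpyDeviceToHost);
    for (size_t i = 0; i < count; ++i)
        printf("output2[%zu] = %f\n", i, host[i]);
}
```

Calling this at the end of operator 2's enqueue(), e.g. dumpDeviceOutput(outputs[0], elementCount, stream), prints the expected values, even though output2 read back from the engine bindings is wrong.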
When I run onnx.checker.check_model(model), it reports an error saying that my custom plugin op is unknown, but inspecting the network with print(model) shows the structure is correct.
How can I debug this problem? Thanks.
Environment
TensorRT Version: 8.2 GA
GPU Type: RTX 3060
Nvidia Driver Version: 31.0.15.1702
CUDA Version: 11.4
CUDNN Version: 11.4
Operating System + Version: Windows 11
PyTorch Version (if applicable): 1.9.0+cu111