TensorRT custom plug-in affects TensorFlow Lite model accuracy

Description

My program contains two modules. One is object detection, implemented with TensorRT and YOLOv5 based on GitHub - wang-xinyu/tensorrtx: Implementation of popular deep learning networks with TensorRT network definition API. The other is OCR recognition, implemented with TensorFlow Lite. I have found a serious problem: registering a custom TensorRT plug-in with REGISTER_TENSORRT_PLUGIN reduces the TensorFlow Lite model's accuracy by about 20%, even though the TensorFlow Lite model runs only on the CPU and uses no custom operators. If I simply comment out the REGISTER_TENSORRT_PLUGIN line, the TensorFlow Lite accuracy goes up to 98% (the correct accuracy when the OCR module is tested on its own); otherwise it is 77%.
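For reference, my plug-in registration follows the usual tensorrtx pattern, roughly like the sketch below. The class name MyYoloPluginCreator and the stub method bodies are placeholders for illustration, not my real implementation; the real creator builds the YOLO layer plugin in createPlugin()/deserializePlugin().

```cpp
#include <NvInfer.h>
#include <string>

using namespace nvinfer1;

// Placeholder creator for illustration only (TensorRT 7.x IPluginCreator API).
class MyYoloPluginCreator : public IPluginCreator
{
public:
    const char* getPluginName() const override { return "MyYoloLayer_TRT"; }
    const char* getPluginVersion() const override { return "1"; }
    const PluginFieldCollection* getFieldNames() override { return &mFC; }

    IPluginV2* createPlugin(const char* name, const PluginFieldCollection* fc) override
    {
        return nullptr; // real code constructs the plugin from the field collection
    }

    IPluginV2* deserializePlugin(const char* name, const void* serialData, size_t serialLength) override
    {
        return nullptr; // real code rebuilds the plugin from the serialized engine
    }

    void setPluginNamespace(const char* libNamespace) override { mNamespace = libNamespace; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    PluginFieldCollection mFC{};
    std::string mNamespace;
};

// The macro defines a static PluginRegistrar<MyYoloPluginCreator> object whose
// constructor runs before main() and registers the creator with the global
// plugin registry (getPluginRegistry()). Commenting out this single line is
// the change that restores the TensorFlow Lite accuracy to 98%.
REGISTER_TENSORRT_PLUGIN(MyYoloPluginCreator);
```

As far as I understand, the macro's only effect is that static registration before main(), so I do not see how it could interact with the TensorFlow Lite interpreter.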

This problem has confused me for a long time. Could you give me some advice?

Environment

TensorRT Version: 7.2.1
GPU Type: GTX1070
CUDA Version: 10.2
CUDNN Version: 8.0.4
Operating System + Version: Ubuntu 18.04
Programming Language: C++
TensorFlow Lite Version: 2.2.0

Hi, could you please share the model and script so that we can try reproducing the issue at our end?

Also, we recommend you check the sample links below, as they might answer your concern.

Thanks!

Thanks for your quick reply. I'm sorry that I cannot share the models. Could you tell me what might cause this problem? Or could you explain how REGISTER_TENSORRT_PLUGIN works?

Thanks very much!

Hi @jiangnan,
Could you please refer to the document below:

Thanks!

OK, I will read it and try to find my answer. Thanks very much.