Inconsistent inference results between TensorRT 7 and TensorRT 8

When I run the same ONNX model converted to a TRT engine file, the inference results from TensorRT 7 and TensorRT 8 are inconsistent: TensorRT 8 produces the correct result, while TensorRT 7 outputs extra bounding boxes, some of which are shifted in position.
The model I am using is RetinaNet. What is the problem, and why do TensorRT 7 and TensorRT 8 behave differently on this model?
When I use an SSD model, the results from TensorRT 7 and TensorRT 8 are essentially identical and correct.

TensorRT Version:
NVIDIA Driver Version: 470
CUDA Version: 10.2
cuDNN Version: 8.2.0

We request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

import sys
import onnx
filename = yourONNXmodel  # path to your ONNX file
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is structurally invalid
  2. Try running your model with the trtexec command, for example:

trtexec --onnx=model.onnx --verbose

In case you are still facing the issue, we request you to share the trtexec --verbose log for further debugging.
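To quantify the mismatch you described (extra and shifted boxes in TensorRT 7), it can also help to dump the detections from both engines and compare them numerically. Below is a minimal sketch of one way to do this; the helper names and the detection layout (`[x1, y1, x2, y2]` per box) are assumptions for illustration, not part of your model's actual output format:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in [x1, y1, x2, y2] order."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def compare_detections(dets_trt7, dets_trt8, iou_thr=0.5):
    """Count TRT7 boxes that match some TRT8 box above an IoU threshold.

    Returns (matched, unmatched); unmatched boxes are candidates for the
    extra/shifted detections seen only in the TensorRT 7 output.
    """
    matched = sum(
        1 for d7 in dets_trt7
        if any(iou(d7, d8) >= iou_thr for d8 in dets_trt8)
    )
    return matched, len(dets_trt7) - matched
```

For example, a TRT7 box shifted by one pixel still matches its TRT8 counterpart at IoU 0.5, while a spurious box far from any TRT8 detection is counted as unmatched. Sharing such a summary alongside the --verbose logs would help narrow down whether the difference comes from the decode/NMS stage or from the backbone.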