TensorRT: incorrect results in FP16

Description

The TensorRT FP16 engine produces incorrect results: the output depends on the previous input, as if state is carried over between inference calls. The FP32 engine of the same model produces correct results.
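One quick way to confirm this behavior is to run the same input twice with a different input in between and compare the two outputs. Below is a minimal sketch using the Polygraphy toolkit that accompanies TensorRT; the model path "model.onnx", the tensor names "input"/"output", and the input shape are placeholders, not details from this report:

```python
# Determinism check: the same input should give the same output regardless
# of what was run before it. Names and shapes below are placeholders.
import numpy as np
from polygraphy.backend.trt import (
    CreateConfig, EngineFromNetwork, NetworkFromOnnxPath, TrtRunner,
)

build_engine = EngineFromNetwork(
    NetworkFromOnnxPath("model.onnx"),
    config=CreateConfig(fp16=True),  # build the FP16 engine under test
)

with TrtRunner(build_engine) as runner:
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)      # assumed shape
    other = np.random.rand(1, 3, 224, 224).astype(np.float32)

    out_first = runner.infer({"input": x})["output"].copy()    # copy: buffers are reused
    runner.infer({"input": other})   # run a different input in between
    out_second = runner.infer({"input": x})["output"].copy()

    # If the engine were stateless, this difference would be exactly 0.
    print("max abs diff:", np.abs(out_first - out_second).max())
```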

Environment

TensorRT Version: TensorRT-8.2.0.6
GPU Type: Tesla T4
Nvidia Driver Version: 470.57.02
CUDA Version: 11.4
CUDNN Version: 8.2.4.15 (cudnn-11.4-linux-x64-v8.2.4.15.tgz)
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro (for example, a self-contained script like the sketch after this list)
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered
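For reference, a self-contained comparison along these lines would make the FP16/FP32 accuracy gap easy to verify. This is only a sketch, not the reporter's actual script: it assumes a "model.onnx" file and that both Polygraphy and onnxruntime are installed.

```python
# Sketch: compare FP16 TensorRT results against an ONNX Runtime reference
# on randomly generated inputs. "model.onnx" is a placeholder.
from polygraphy.backend.onnxrt import OnnxrtRunner, SessionFromOnnx
from polygraphy.backend.trt import (
    CreateConfig, EngineFromNetwork, NetworkFromOnnxPath, TrtRunner,
)
from polygraphy.comparator import Comparator

runners = [
    OnnxrtRunner(SessionFromOnnx("model.onnx")),  # FP32 reference
    TrtRunner(EngineFromNetwork(NetworkFromOnnxPath("model.onnx"),
                                config=CreateConfig(fp16=True))),
]

# Runs every runner on the same inputs and checks that outputs agree.
results = Comparator.run(runners)
Comparator.compare_accuracy(results)
```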

Hi,
We recommend that you check the supported features at the link below.

You can refer to the link below for the full list of supported operators.
For unsupported operators, you need to create a custom plugin to support the operation.
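If you are unsure whether the model contains unsupported operators, one way to check is to parse it with the TensorRT ONNX parser and print any parser errors, as in the sketch below ("model.onnx" is a placeholder for your model file):

```python
# Sketch: parse the ONNX model and report any nodes TensorRT cannot handle.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))  # each error names the failing node/op
```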

Thanks!