FP16 engine (generated on Windows with TRT 8.6.1) gets stuck on Linux (TRT 8.6.1)

Description

The FP16 engine generated on Windows gets stuck during inference on Linux (same environment). However, the FP32 engine generated on Windows runs normally on Linux, and an FP16 engine generated on Linux also works fine on Linux.
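TensorRT engines are generally not portable across operating systems, TensorRT versions, or GPU architectures, so the usual workaround is to rebuild the engine on the machine that will run inference. A minimal sketch using `trtexec` (the `model.onnx` and output paths are hypothetical placeholders):

```shell
# Rebuild the FP16 engine directly on the Linux target instead of
# copying the Windows-built .plan file over.
trtexec --onnx=model.onnx --fp16 --saveEngine=model_fp16.plan
```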

Environment

TensorRT Version: 8.6.1
GPU Type: RTX 3070
Nvidia Driver Version: 537.13
CUDA Version: 12.1
CUDNN Version: 8.9.1
Operating System + Version: Windows 10 + Ubuntu 22.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered## Description

(1)

Environment

TensorRT Version: TRT861
GPU Type: 3070
Nvidia Driver Version: 537.13
CUDA Version: 12.1
CUDNN Version: 8.9.1
Operating System + Version: win10 + ubuntu22.02
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
We recommend you check the supported features at the link below.

You can refer to the link below for the full list of supported operators.
For unsupported operators, you will need to create a custom plugin to support the operation.

Thanks!