TensorRT outputs NaN for FP16 and for FP16+INT8

Description

I run inference on a TensorRT model with two inputs and two outputs. On real input data, the FP16 engine returns NaN for both outputs; the FP16+INT8 engine built at optimization level 5 returns one valid output and one output that is all NaN. The same ONNX model produces correct results in ONNX Runtime. A rough sketch of the inference path follows below.
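For reference, the inference path looks roughly like this minimal sketch (TensorRT 10 Python API with PyCUDA). The engine filename and the assumption of static input shapes are placeholders, not details from the shared model; the output buffers are checked for NaN after execution:

```python
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context

logger = trt.Logger(trt.Logger.WARNING)
# "model_fp16.engine" is a placeholder filename
with open("model_fp16.engine", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for all I/O tensors (assumes static shapes)
host_bufs, dev_bufs = {}, {}
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    shape = tuple(context.get_tensor_shape(name))
    dtype = trt.nptype(engine.get_tensor_dtype(name))
    host_bufs[name] = np.zeros(shape, dtype=dtype)
    dev_bufs[name] = cuda.mem_alloc(host_bufs[name].nbytes)
    context.set_tensor_address(name, int(dev_bufs[name]))

# Fill the two input buffers with real data here, e.g.
# host_bufs[<input_name>][...] = real_data

stream = cuda.Stream()
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    if engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT:
        cuda.memcpy_htod_async(dev_bufs[name], host_bufs[name], stream)
context.execute_async_v3(stream.handle)
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    if engine.get_tensor_mode(name) == trt.TensorIOMode.OUTPUT:
        cuda.memcpy_dtoh_async(host_bufs[name], dev_bufs[name], stream)
stream.synchronize()

# Report the fraction of NaN values in each output
for name, buf in host_bufs.items():
    if engine.get_tensor_mode(name) == trt.TensorIOMode.OUTPUT:
        print(name, "NaN fraction:", np.isnan(buf).mean())
```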

Models are available at this link: https://pan.baidu.com/s/11DkowpUSHdoruuAL0JOk8g?pwd=4x96 (extraction code: 4x96)

Environment

TensorRT Version: 10.3
GPU Type: Jetson AGX Orin
Nvidia Driver Version:
CUDA Version: 12.6
CUDNN Version: 9.3
Operating System + Version: aarch64 + Ubuntu 22.04 + JetPack 6.1
Python Version (if applicable): 3.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 2.5.0
Baremetal or Container (if container which image + tag):

Relevant Files

The models needed to reproduce this issue are shared via the Baidu Netdisk link in the Description above.

Steps To Reproduce

  1. Convert the ONNX model (linked above) to two TensorRT engines: one with FP16, and one with FP16+INT8 at builder optimization level 5 (plausible trtexec commands are shown below).
  2. Run inference on real input data with both engines.
  3. Observe that the FP16 engine returns NaN for both outputs, and the FP16+INT8 engine returns NaN for one of the two outputs, while ONNX Runtime produces valid results on the same model and data.