FP16 produces wrong results with dynamic input

Description

Converting an ONNX model with dynamic input shapes to a TensorRT engine produces wrong results in FP16.

Environment

TensorRT Version: 8.2.1.8
GPU Type: GeForce RTX 2070 SUPER
Nvidia Driver Version: 470.42
CUDA Version: 11.4
CUDNN Version: 8.2
Operating System + Version: Ubuntu 18.04

Relevant Files

dynamic_onnx.zip (56.4 MB)

Steps To Reproduce

FP32 produces correct results (data range 0-255),
but the FP16 results are all NaN.
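For reference, the conversion follows the usual pattern below, shown as a minimal sketch with the TensorRT 8.2 Python API. The model path, the input tensor name "input", and the min/opt/max shapes are placeholders, not the actual values from the attached model.

build_fp16_engine.py

import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX model.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # enable FP16 kernels
config.max_workspace_size = 1 << 30   # 1 GiB (TRT 8.2-era setting)

# Dynamic input shapes require an optimization profile.
profile = builder.create_optimization_profile()
profile.set_shape("input",
                  (1, 3, 224, 224),    # min
                  (1, 3, 640, 640),    # opt
                  (1, 3, 1024, 1024))  # max
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("model_fp16.engine", "wb") as f:
    f.write(engine_bytes)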

Hi,
Please share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid
print("ONNX check passed")
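For example, assuming the model is saved as model.onnx:

python check_model.py model.onnx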
2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
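For a dynamic-shape model in FP16, the invocation looks roughly like the following; the input tensor name and the shapes are placeholders, so substitute your model's own:

trtexec --onnx=model.onnx --fp16 \
    --minShapes=input:1x3x224x224 \
    --optShapes=input:1x3x640x640 \
    --maxShapes=input:1x3x1024x1024 \
    --verbose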
Thanks!

I have checked the ONNX model; it is OK.
Here is the verbose log.
info.log (18.3 MB)

Hi,

We could not reproduce the issue. Could you please share a script that reproduces it?
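In the meantime, one common cause of all-NaN FP16 outputs is an intermediate activation overflowing the FP16 range (the largest finite FP16 value is 65504). As a quick check, you can run the FP32 ONNX model and look for values near or beyond that range; the file name, the input shape, and the use of onnxruntime below are assumptions, not taken from your setup.

check_fp16_range.py

import numpy as np
import onnxruntime as ort

FP16_MAX = 65504.0  # largest finite FP16 value

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]

# Pick one concrete shape inside your dynamic range (placeholder values).
x = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = sess.run(None, {inp.name: x})

for meta, out in zip(sess.get_outputs(), outputs):
    peak = float(np.abs(out).max())
    print(meta.name, "max |value| =", peak,
          "| exceeds FP16 range:", peak > FP16_MAX)

Note that this only inspects the final outputs; intermediate tensors can overflow even when the outputs do not, so a per-layer comparison (for example with the Polygraphy tool from the TensorRT repository) may be needed.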

Thank you.