All outputs are NaN

Description

The ONNX model converts to a TensorRT engine correctly, but all inference outputs are NaN.

Environment

TensorRT Version: 8.4.1
GPU Type: RTX 3060
Nvidia Driver Version: 463
CUDA Version: 11.2
CUDNN Version: 8.2.1
Operating System + Version: Windows 10


Hi,
We request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys

import onnx

# Usage: python check_model.py your_model.onnx
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises if the model is invalid
print("The ONNX model is valid.")
  2. Try running your model with the trtexec command, as shown below.
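
For example, a minimal invocation (assuming your model file is named model.onnx; the flags are standard trtexec options):

trtexec --onnx=model.onnx --verbose

You can also save the built engine and reload it, to check whether the NaNs already appear at that stage:

trtexec --onnx=model.onnx --saveEngine=model.engine --verbose
trtexec --loadEngine=model.engine --verbose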

In case you are still facing the issue, we request you to share the trtexec "--verbose" log for further debugging.
Thanks!

Hi,
When I run my engine with the trtexec command, all outputs are NaN.
When I run my engine with C++, all outputs are NaN.
However, when I run the same engine from Python, all outputs are correct.
I'm looking forward to your advice!

Could you please share the issue-repro ONNX model and the verbose logs with us for better debugging?

Thank you.

I can't share the model (perhaps I can share some "dummy" model privately later if necessary). Any advice on why this might happen would be much appreciated!

Hi,

Please refer to the following similar issue, and please make sure your input is correct.
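
As a quick way to verify this, here is a minimal sketch (assuming the TensorRT 8.x Python API, pycuda, static binding shapes, and a placeholder engine file name model.engine) that feeds a fixed, finite value to every input binding and reports whether any output contains NaN:

import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("model.engine", "rb") as f:  # placeholder engine path
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (assumes static shapes).
bindings, buffers = [], []
for i in range(engine.num_bindings):
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    if engine.binding_is_input(i):
        host = np.full(shape, 0.5, dtype=dtype)  # fixed, finite test input
    else:
        host = np.zeros(shape, dtype=dtype)
    device = cuda.mem_alloc(host.nbytes)
    cuda.memcpy_htod(device, host)
    bindings.append(int(device))
    buffers.append((host, device))

context.execute_v2(bindings)

# Copy outputs back to the host and check them for NaN.
for i, (host, device) in enumerate(buffers):
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh(host, device)
        print(engine.get_binding_name(i), "contains NaN:", bool(np.isnan(host).any()))

If NaNs appear here as well, compare this input and the binding order against what your C++ code and trtexec feed the engine; the difference is usually in preprocessing or buffer setup.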

Thank you.