Given a serialized model, how do I determine whether it is FP32 or FP16?

Description

Given a serialized model, how do I determine whether it is FP32 or FP16? I don't know the parameters that were used when serializing it.

Environment

TensorRT Version: 8.2.4
GPU Type: 1660ti
Nvidia Driver Version: 530
CUDA Version: 11.6
CUDNN Version: 8.2.1
Operating System + Version: win10
Python Version (if applicable): 3.9.7
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.0
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
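If the checker returns without raising, the ONNX file is structurally valid; otherwise onnx.checker raises a ValidationError describing the problem.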
  2. Try running your model with the trtexec command, for example:
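A minimal invocation, assuming a hypothetical engine file model.engine (use --onnx=model.onnx instead if you are starting from an ONNX file):

trtexec --loadEngine=model.engine --verbose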

In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

My model was serialized from a C++ project; can it be validated in a Python script?

I ran it with --verbose, but I cannot find any information about the data type in the log.



Hi @shaoyan_shi93 ,
I am afraid that the data type is only logged during the build phase.
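That said, an engine serialized from C++ can still be deserialized with the TensorRT Python API on the same GPU and TensorRT version, and the input/output binding data types can be inspected. This is a minimal sketch, not an official recipe: model.engine is a hypothetical path, and binding dtypes describe only the I/O tensors, not the internal layer precision, so an engine built with FP16 enabled may still report FLOAT bindings.

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# "model.engine" is a hypothetical path to your serialized engine
with open("model.engine", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

# Binding dtypes cover only the engine's inputs/outputs,
# not the precision of internal layers
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_dtype(i))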

Thanks