[8] Assertion failed: trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT

Description

I tried to convert my model to a TensorRT engine on a Jetson Nano using the onnx-tensorrt tools, but the conversion crashed.
The log looks like this:
[2021-03-12 10:46:08 INFO] 877:Mul → (3, 40, 40, 2)
[2021-03-12 10:46:08 INFO] 878:Constant →
Unsupported ONNX data type: DOUBLE (11)
[2021-03-12 10:46:08 INFO] 879:Sub → (3, 40, 40, 2)
[2021-03-12 10:46:08 INFO] 880:Constant → (1, 1, 40, 40, 2)
[2021-03-12 10:46:08 INFO] 881:Add → (3, 40, 40, 2)
[2021-03-12 10:46:08 INFO] 882:Constant →
[2021-03-12 10:46:08 INFO] 883:Mul → (3, 40, 40, 2)
While parsing node number 438 [Cast → "884"]:
--- Begin node ---
input: "883"
output: "884"
op_type: "Cast"
attribute {
  name: "to"
  i: 1
  type: INT
}
--- End node ---
ERROR: /home/trter/onnx-tensorrt-6.0/builtin_op_importers.cpp:700 In function importCast:
[8] Assertion failed: trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT

I exported the ONNX model from a PyTorch 1.2 environment; is that version OK?

Any advice? Or do I have to upgrade the TensorRT version?

Environment

TensorRT Version: TensorRT 6.0
GPU Type: Jetson Nano integrated GPU (JetPack 4.3)
Nvidia Driver Version:
CUDA Version: included in JetPack 4.3
CUDNN Version: included in JetPack 4.3
Operating System + Version: included in JetPack 4.3
Python Version (if applicable): included in JetPack 4.3
TensorFlow Version (if applicable):
PyTorch Version (if applicable): PyTorch 1.2
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Please share the ONNX model and the script, if you haven't already, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import sys
import onnx

# usage: python check_model.py model.onnx
filename = sys.argv[1]           # path to the ONNX model to check
model = onnx.load(filename)      # load the ONNX model
onnx.checker.check_model(model)  # raises an exception if the model is invalid
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hi! Thanks!

  1. I already used
    model = onnx.load(f) # load the ONNX model
    onnx.checker.check_model(model) # check the ONNX model
    and nothing seems wrong.

  2. This is the trtexec log on the Jetson Nano from running "./trtexec --onnx=***.onnx --verbose":
    [02/12/2021-20:39:55] [V] [TRT] 876:Constant →
    [02/12/2021-20:39:55] [V] [TRT] 877:Mul → (3, 40, 40, 2)
    [02/12/2021-20:39:55] [V] [TRT] 878:Constant →
    Unsupported ONNX data type: DOUBLE (11)
    [02/12/2021-20:39:55] [V] [TRT] 879:Sub → (3, 40, 40, 2)
    [02/12/2021-20:39:55] [V] [TRT] 880:Constant → (1, 1, 40, 40, 2)
    [02/12/2021-20:39:55] [V] [TRT] 881:Add → (3, 40, 40, 2)
    [02/12/2021-20:39:55] [V] [TRT] 882:Constant →
    [02/12/2021-20:39:55] [V] [TRT] 883:Mul → (3, 40, 40, 2)
    While parsing node number 438 [Cast → "884"]:
    --- Begin node ---
    input: "883"
    output: "884"
    op_type: "Cast"
    attribute {
      name: "to"
      i: 1
      type: INT
    }
    --- End node ---
    ERROR: builtin_op_importers.cpp:727 In function importCast:
    [8] Assertion failed: trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT
    [02/12/2021-20:39:55] [E] Failed to parse onnx file
    [02/12/2021-20:39:55] [E] Parsing model failed
    [02/12/2021-20:39:55] [E] Engine could not be created
    &&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/trter/onnx_trt_models/model.onnx --verbose

Almost the same error. What do you mean by sharing the ONNX model and the script? How should I share them?

Hi @zyj89diswiss,

We don't support DOUBLE tensors. Can you please use 32-bit floats instead? We also recommend upgrading to the latest TensorRT 7.2.x release.

Thank you.
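For context: NumPy arrays default to float64, so grid or offset constants built from them in a model's forward pass (the (1, 1, 40, 40, 2) Constant in the log above looks like such a grid) get exported as DOUBLE unless explicitly cast. A minimal sketch of an export-time fix, assuming a PyTorch export path; the Head module below is a made-up stand-in for the real detection head:

import numpy as np
import torch
import torch.nn as nn

class Head(nn.Module):
    # Made-up stand-in for a detection head that adds a precomputed grid.
    def forward(self, x):
        grid = np.zeros((1, 1, 40, 40, 2))  # NumPy defaults to float64
        # torch.from_numpy keeps float64, which would export as a DOUBLE
        # constant; the explicit .float() keeps it in 32-bit
        return x + torch.from_numpy(grid).float()

model = Head().eval().float()  # .float() casts all parameters/buffers to FP32
dummy = torch.randn(3, 40, 40, 2)
torch.onnx.export(model, dummy, "model_fp32.onnx")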

Sadly, our platform only supports TensorRT 6.0 for now, and it may take a while to upgrade. Can you tell me how to use 32-bit floats? I've tried converting my model to float in several ways, but they all failed.
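If re-exporting isn't an option, one way to force 32-bit floats is to rewrite the saved file directly. This is a sketch, not an official tool: it covers the common cases (DOUBLE initializers, DOUBLE Constant payloads, and Cast nodes targeting DOUBLE), the filenames are placeholders, and graph inputs/outputs declared as DOUBLE would need the same treatment:

import numpy as np
import onnx
from onnx import TensorProto, numpy_helper

model = onnx.load("model.onnx")  # placeholder path

# Convert DOUBLE initializers (stored weights/constants) to FLOAT
for init in model.graph.initializer:
    if init.data_type == TensorProto.DOUBLE:
        arr = numpy_helper.to_array(init).astype(np.float32)
        init.CopyFrom(numpy_helper.from_array(arr, init.name))

for node in model.graph.node:
    # Constant nodes carry their payload in the "value" tensor attribute
    if node.op_type == "Constant":
        for attr in node.attribute:
            if attr.name == "value" and attr.t.data_type == TensorProto.DOUBLE:
                arr = numpy_helper.to_array(attr.t).astype(np.float32)
                attr.t.CopyFrom(numpy_helper.from_array(arr, attr.t.name))
    # Retarget Cast ops whose "to" attribute is DOUBLE
    elif node.op_type == "Cast":
        for attr in node.attribute:
            if attr.name == "to" and attr.i == TensorProto.DOUBLE:
                attr.i = TensorProto.FLOAT

onnx.checker.check_model(model)
onnx.save(model, "model_fp32.onnx")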

Hi @zyj89diswiss,

Please share the ONNX model and the relevant scripts so we can assist you better.

Thank you.
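For anyone debugging the same assertion, a quick diagnostic (a sketch; the filename is a placeholder) that lists which parts of a saved model are still DOUBLE:

import onnx
from onnx import TensorProto

model = onnx.load("model.onnx")  # placeholder path

# Initializers stored as DOUBLE
double_inits = [init.name for init in model.graph.initializer
                if init.data_type == TensorProto.DOUBLE]

# Constant nodes whose payload tensor is DOUBLE
double_consts = [node.output[0] for node in model.graph.node
                 if node.op_type == "Constant"
                 and any(a.name == "value" and a.t.data_type == TensorProto.DOUBLE
                         for a in node.attribute)]

# Cast nodes whose "to" attribute targets DOUBLE
double_casts = [node.output[0] for node in model.graph.node
                if node.op_type == "Cast"
                and any(a.name == "to" and a.i == TensorProto.DOUBLE
                        for a in node.attribute)]

print("DOUBLE initializers:", double_inits or "none")
print("DOUBLE Constant nodes:", double_consts or "none")
print("Casts to DOUBLE:", double_casts or "none")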