I tried to convert a model to a TensorRT engine on a Jetson Nano using the onnx-tensorrt tool, but the conversion crashed.
The log looks like this:
[2021-03-12 10:46:08 INFO] 877:Mul -> (3, 40, 40, 2)
[2021-03-12 10:46:08 INFO] 878:Constant ->
Unsupported ONNX data type: DOUBLE (11)
[2021-03-12 10:46:08 INFO] 879:Sub -> (3, 40, 40, 2)
[2021-03-12 10:46:08 INFO] 880:Constant -> (1, 1, 40, 40, 2)
[2021-03-12 10:46:08 INFO] 881:Add -> (3, 40, 40, 2)
[2021-03-12 10:46:08 INFO] 882:Constant ->
[2021-03-12 10:46:08 INFO] 883:Mul -> (3, 40, 40, 2)
While parsing node number 438 [Cast -> "884"]:
--- Begin node ---
input: "883"
output: "884"
op_type: "Cast"
attribute {
  name: "to"
  i: 1
  type: INT
}
--- End node ---
ERROR: /home/trter/onnx-tensorrt-6.0/builtin_op_importers.cpp:700 In function importCast:
[8] Assertion failed: trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT
I exported the ONNX model in a PyTorch 1.2 environment; is that version correct?
Any advice? Or do I have to upgrade the TensorRT version?
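For reference, my export was along these lines (a minimal sketch with a placeholder network, not my actual model):

import torch
import torch.nn as nn

# Placeholder network standing in for my real model.
model = nn.Sequential(nn.Conv2d(3, 8, 3)).float().eval()
dummy = torch.randn(1, 3, 320, 320)

# Model and dummy input are both FP32 here; as far as I can tell,
# plain Python float constants used inside forward() can still end
# up as DOUBLE constants in the exported graph.
torch.onnx.export(model, dummy, "model.onnx", opset_version=9)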
Environment
TensorRT Version: TensorRT 6
GPU Type: included in Jetson Nano JetPack 4.3
Nvidia Driver Version:
CUDA Version: included in Jetson Nano JetPack 4.3
CUDNN Version: included in Jetson Nano JetPack 4.3
Operating System + Version: included in Jetson Nano JetPack 4.3
Python Version (if applicable): included in Jetson Nano JetPack 4.3
TensorFlow Version (if applicable):
PyTorch Version (if applicable): PyTorch 1.2
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import onnx

# Replace with the path to your .onnx file.
filename = "yourONNXmodel.onnx"
model = onnx.load(filename)
# Raises an exception if the model is structurally invalid.
onnx.checker.check_model(model)
2) Try running your model with the trtexec command: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
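For example (assuming the standard JetPack layout, where the prebuilt binary usually lives under /usr/src/tensorrt/bin, and a placeholder model path):

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --verbose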
In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
Thanks!
--- End node ---
ERROR: builtin_op_importers.cpp:727 In function importCast:
[8] Assertion failed: trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT
[02/12/2021-20:39:55] [E] Failed to parse onnx file
[02/12/2021-20:39:55] [E] Parsing model failed
[02/12/2021-20:39:55] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/trter/onnx_trt_models/model.onnx --verbose
Almost the same error. What do you mean by sharing the ONNX model and the script?
Sadly, our platform only supports TensorRT 6.0 for now, and it may take a while to upgrade. Can you tell me how to use 32-bit floats? I've tried converting my model to float in several ways, but they all failed.
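For reference, one of the attempts looked roughly like this (a sketch; it assumes the DOUBLE values come from initializers and Constant nodes, which is only my reading of the parser log):

import numpy as np
import onnx
from onnx import numpy_helper, TensorProto

model = onnx.load("model.onnx")  # placeholder path

# Cast any DOUBLE initializers down to FLOAT.
for init in model.graph.initializer:
    if init.data_type == TensorProto.DOUBLE:
        arr = numpy_helper.to_array(init).astype(np.float32)
        init.CopyFrom(numpy_helper.from_array(arr, init.name))

# Do the same for tensors embedded in Constant nodes, which is
# where the parser reported the unsupported DOUBLE type.
for node in model.graph.node:
    if node.op_type == "Constant":
        for attr in node.attribute:
            if attr.name == "value" and attr.t.data_type == TensorProto.DOUBLE:
                arr = numpy_helper.to_array(attr.t).astype(np.float32)
                attr.t.CopyFrom(numpy_helper.from_array(arr, attr.t.name))

onnx.save(model, "model_fp32.onnx")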