How to deal with Cast to INT32 problem when converting ONNX to TensorRT 6

Hello,
I trained a TF 1.14 model and converted it to ONNX. When I then converted the ONNX model to TensorRT 6, I hit the following problem:

Parsing model
While parsing node number 1 [Cast -> "_interpolate/Shape__2:0"]:
ERROR: /home/nvidia/package/onnx-tensorrt/builtin_op_importers.cpp:286 In function importCast:
[8] Assertion failed: trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT

ENV:
ONNX IR version: 0.0.5
Opset version: 9
Producer name: tf2onnx
Producer version: 1.4.1

How can I work around this? Thanks.

Hi,

Based on the ONNX-to-TensorRT support matrix, only the FP16->FP32 cast is supported:
https://github.com/onnx/onnx-tensorrt/blob/6.0/operators.md

Cast to INT32 is not supported in onnx-tensorrt v6.0.

But it looks like many more cast types are supported in the v7.0 branch.
You can wait for TensorRT v7.0 support on Jetson.

Thanks.