[6] Assertion failed: convertDtype(onnxType, &dtype) && "Unsupported cast!"

Hi Nvidia Team,

We have trained RetinaNet on a custom dataset. We converted the .pt model to ONNX with torch version 1.7, and while converting from ONNX to TensorRT we are facing the issue pasted below:

[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:703: While parsing node number 768 [Cast -> "1348"]:
[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:704: --- Begin node ---
[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:705: input: "1345"
output: "1348"
name: "Cast_768"
op_type: "Cast"
attribute {
  name: "to"
  i: 11
  type: INT
}

[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:706: --- End node ---
[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:709: ERROR: /home/darshan/onnx-tensorrt/builtin_op_importers.cpp:294 In function importCast:
[6] Assertion failed: convertDtype(onnxType, &dtype) && "Unsupported cast!"
[06/02/2021-15:57:20] [E] Failed to parse onnx file
[06/02/2021-15:57:20] [E] Parsing model failed
[06/02/2021-15:57:20] [E] Engine creation failed
[06/02/2021-15:57:20] [E] Engine set up failed

The issue is happening with a Cast operator.

May I know what is going wrong with this model and how to resolve it? Any help would be greatly appreciated.
I have also attached a file with the detailed verbose log below.

Darshan C G

retinanet_verbose (468.8 KB)

Could you share the ONNX model and the conversion script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below:

import onnx

filename = "yourONNXmodel"  # path to your ONNX file
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises an exception if the model is invalid

  2. Try running your model with the trtexec command.

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.

Hi @darshancganji12,

Based on the verbose logs, it looks like your model uses the DOUBLE data type:

[06/02/2021-15:57:20] [V] [TRT] Searching for input: 1345
[06/02/2021-15:57:20] [V] [TRT] Cast_768 [Cast] inputs: [1345 → (-1, -1, 1)],
Unsupported ONNX data type: DOUBLE (11)

We recommend not using the DOUBLE or INT64 data types in your model, because TensorRT does not support them.
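For context, the `to` attribute of an ONNX Cast node holds a `TensorProto.DataType` enum value, and `11` is DOUBLE, which is what the error message "Unsupported ONNX data type: DOUBLE (11)" refers to. A minimal sketch of decoding that attribute (the enum values below come from the ONNX specification; `describe_cast_target` is a hypothetical helper for illustration, not a TensorRT API):

```python
# A subset of the ONNX TensorProto.DataType enum (from the ONNX spec)
ONNX_DTYPE_NAMES = {
    1: "FLOAT",     # 32-bit float, supported by TensorRT
    6: "INT32",     # supported by TensorRT
    7: "INT64",     # not natively supported by TensorRT
    10: "FLOAT16",  # supported by TensorRT
    11: "DOUBLE",   # not supported by TensorRT
}

def describe_cast_target(to_attr: int) -> str:
    """Translate a Cast node's 'to' attribute into a readable dtype name."""
    return ONNX_DTYPE_NAMES.get(to_attr, f"UNKNOWN ({to_attr})")

print(describe_cast_target(11))  # the failing Cast_768 node casts to DOUBLE
```

Running this against the failing node's `to` value of 11 prints `DOUBLE`, matching the parser error.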

Thank you.

Hi @spolisetty,

Thanks for your reply.
Can we change the DOUBLE (11) to DOUBLE (8 or 7) in the Cast Op through ONNX Graphsurgeon API?
What I observed is that: I had converted one ONNX model(which had Cast Op with DOUBLE (7)) to TRT successfully.

Screenshot from 2021-06-02 21-07-38

Looking forward to your reply.


Hi @spolisetty,

I checked the whole code implementation and found that I had not used the Double data type anywhere, so I am not sure how it is occurring.
Can you please assist me in solving this issue?


Hello Darshan, I would recommend using the following link for RetinaNet TensorRT conversion.

NVIDIA/retinanet-examples: Fast and accurate object detection with end-to-end GPU optimization (github.com)

Hi @mmakwana,

Thanks for your reply. We have modified the architecture of RetinaNet to meet our requirements. We also tried that repository earlier, and it did not work. Since the only issue is with the data type, how about modifying the attribute in the Cast op?


Hi @darshancganji12,

Can we change the DOUBLE (11) to DOUBLE (8 or 7) in the Cast Op through ONNX Graphsurgeon API?

This is possible. The ONNX GraphSurgeon API allows you to modify a tensor's dtype. Please refer to the ONNX GraphSurgeon documentation.

Thank you.