We have trained RetinaNet on a custom dataset. We converted the .pt model to ONNX with torch version 1.7, and while converting from ONNX to TRT we are facing the issue pasted below:
[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:703: While parsing node number 768 [Cast -> "1348"]:
[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:704: --- Begin node ---
[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:705: input: "1345"
output: "1348"
name: "Cast_768"
op_type: "Cast"
attribute {
name: "to"
i: 11
type: INT
}
[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:706: --- End node ---
[06/02/2021-15:57:20] [E] [TRT] /home/darshan/onnx-tensorrt/ModelImporter.cpp:709: ERROR: /home/darshan/onnx-tensorrt/builtin_op_importers.cpp:294 In function importCast:
[6] Assertion failed: convertDtype(onnxType, &dtype) && "Unsupported cast!"
[06/02/2021-15:57:20] [E] Failed to parse onnx file
[06/02/2021-15:57:20] [E] Parsing model failed
[06/02/2021-15:57:20] [E] Engine creation failed
[06/02/2021-15:57:20] [E] Engine set up failed
The issue is happening with the Cast operator.
Could you let me know what is going wrong with this model and how to resolve it? Any help would be greatly appreciated.
I have also attached a file with the detailed verbose log below.
Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Meanwhile, you can try a few things:
1) Validate your model with the snippet below:
check_model.py
import sys
import onnx
# Pass the path to your ONNX model as the first argument.
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command: https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!
Thanks for your reply.
Can we change the Cast target from DOUBLE (11) to a supported type such as FLOAT (1) or INT64 (7) through the ONNX GraphSurgeon API?
What I observed is that I had previously converted another ONNX model (whose Cast op used INT64, to=7) to TRT successfully.
I checked the whole implementation and found that I had not used the double data type anywhere, so I am not sure where it is coming from.
Can you please assist me to solve this issue?
Thanks for your reply. We have modified the RetinaNet architecture to meet our requirements. We also tried that earlier, and it did not work. Since the issue is only with the data type, how about modifying the attribute in the Cast op?