Unsupported onnx type DOUBLE 11

Description

While trying to convert an ONNX model to TensorRT using trtexec, we are getting this error.

Environment

TensorRT Version: 7.1.3
GPU Type: GTX 1080
Nvidia Driver Version: 460.80
CUDA Version: 11.2
CUDNN Version: 8
Operating System + Version:
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.6.0
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorrt:20.09-py3

Attached a screenshot for your reference.

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the below snippet:

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, request you to share the trtexec "--verbose" log for further debugging.
Thanks!

Attached the model link.

The command we used:
trtexec --onnx=jde_608x1088_11.onnx --explicitBatch --workspace=16382 --optShapes=input:1x3x608x1088 --maxShapes=input:1x3x608x1088 --minShapes=input:1x3x608x1088 --saveEngine=model.plan

@NVES are the details given enough to replicate the issue?

Hi @sivagurunathan.a,

Based on the error, it looks like you're using the Double data type in your model. Older versions of TensorRT do not support the Double data type; support for it was added in the latest TRT version, 8.0. On the latest version we are unable to reproduce this issue and are able to successfully build the TRT engine.
Please refer to onnx-tensorrt/operators.md at master · onnx/onnx-tensorrt · GitHub for more details.

Thank you.
