ONNX to TensorRT conversion fails

Description

trtexec --onnx=my_model.onnx --batch=1 --saveEngine=test.engine --verbose
fails with the error below:
[06/16/2021-12:03:24] [V] [TRT] ImporterContext.hpp:116: Registering tensor: 498 for ONNX tensor: 498
[06/16/2021-12:03:24] [V] [TRT] ModelImporter.cpp:179: Cast_9 [Cast] outputs: [498 → ()],
[06/16/2021-12:03:24] [V] [TRT] ModelImporter.cpp:103: Parsing node: Neg_10 [Neg]
[06/16/2021-12:03:24] [V] [TRT] ModelImporter.cpp:119: Searching for input: 498
[06/16/2021-12:03:24] [V] [TRT] ModelImporter.cpp:125: Neg_10 [Neg] inputs: [498 → ()],
ERROR: onnx2trt_utils.cpp:1686 In function unaryHelper:
[8] Assertion failed: validUnaryType
[06/16/2021-12:03:24] [E] Failed to parse onnx file
[06/16/2021-12:03:24] [E] Parsing model failed
[06/16/2021-12:03:24] [E] Engine creation failed
[06/16/2021-12:03:24] [E] Engine set up failed

Environment

TensorRT Version: 7.1.3.4
GPU Type: GeForce RTX 3090, 1070
Nvidia Driver Version: 460.32.03
CUDA Version: 11.2
CUDNN Version: 8.1.1.33
Operating System + Version: Ubuntu 20.04

Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

model
log

Steps To Reproduce

trtexec --onnx=midas.onnx --batch=1 --saveEngine=test.engine --verbose

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
We request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validating your model with the snippet below:

check_model.py

import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Trying to run your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, we request you to share the trtexec --verbose log for further debugging.
Thanks!

Thanks for your response! I did share the links to the model and the log earlier. Please find them here:
my_model.onnx - Google Drive
trt.log - Google Drive
Inference from the ONNX model works fine.
The following snippet outputs:
input.1
input shape [1, 3, 512, 512]
output name 1349

import onnxruntime

model_name = "my_model.onnx"
sess = onnxruntime.InferenceSession(model_name)

input_name = sess.get_inputs()[0].name
print(input_name)  # input.1

input_shape = sess.get_inputs()[0].shape
print("input shape", input_shape)  # [1, 3, 384, 672]

output_name = sess.get_outputs()[0].name
print("output name", output_name)

Hi @eyebies,

This is a known issue. A fix will be available in a future release.

Thank you.


Thanks, polisetty, for your response! Would it be possible to explain the actual cause so that I could alter my model?
Thanks again!

@eyebies,

The main reason for this issue is that TensorRT currently does not support the INT32 type for the Neg operator. A fix for this will be available in a future release.


Hello. Any updates regarding this issue? I am currently using TRT 7.2.1 and am getting the same error.