Unsupported ONNX data type: UINT8 (2)

I'm trying to convert the TensorFlow model zoo implementation of Mask R-CNN to TensorRT using the ONNX parser.
I'm running in the NGC container TensorRT 19.02 (TensorRT v5.0.2) with CUDA 10.0 on a GTX 1050.

I've converted the saved model to ONNX format using the currently available GitHub - onnx/tensorflow-onnx: Convert TensorFlow, Keras, Tensorflow.js and Tflite models to ONNX, version 1.8, with opset 6 (which is officially supported by TensorRT v5.0.2).

When I load the generated ONNX file I get the error "Unsupported ONNX data type: UINT8 (2)".
The code to load the ONNX file is taken from the NVIDIA docs:
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("model-opset06.onnx", "rb") as model:  # my ONNX file
        parser.parse(model.read())
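For reference, the "(2)" in the error message is the numeric code of the tensor element type in ONNX's TensorProto.DataType enum. The codes below come from the ONNX spec; the lookup table itself is only for illustration:

```python
# Partial map of ONNX TensorProto.DataType codes (per the ONNX spec).
# The parser error "Unsupported ONNX data type: UINT8 (2)" is code 2:
# the graph input is uint8, which TensorRT 5's ONNX parser cannot import.
ONNX_DTYPE_CODES = {
    0: "UNDEFINED",
    1: "FLOAT",    # float32, the type TensorRT expects for image inputs
    2: "UINT8",    # common for TF object-detection image tensors
    3: "INT8",
    6: "INT32",
    7: "INT64",
    9: "BOOL",
    10: "FLOAT16",
}

print(ONNX_DTYPE_CODES[2])  # -> UINT8
```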

(I've found https://github.com/onnx/onnx-tensorrt/issues/400,
which suggests using onnx-graphsurgeon to edit the ONNX file. This led to the following error during parsing: "[TensorRT] ERROR: Parameter check failed at: …/builder/Network.cpp::addInput::406, condition: isValidDims(dims)", on which I have not found any suggestions online.)

please advise
Omer.

Hi, we request you to share the ONNX model and the script so that we can assist you better.

Alongside, you can try validating your model with the snippet below:

check_model.py

import onnx

filename = "your_model.onnx"  # path to the ONNX model to validate
model = onnx.load(filename)
onnx.checker.check_model(model)

Alternatively, you can try running your model with the trtexec command, for example: trtexec --onnx=your_model.onnx
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

Thanks!

hello,

  1. I am not able to upload the ONNX file; it's 53 MB and the upload simply fails.

  2. I've tried check_model.py;
    it returns:

onnx.checker.check_model(model)
Traceback (most recent call last):
File "", line 1, in
File "/root/.local/lib/python3.6/site-packages/onnx/checker.py", line 102, in check_model
C.check_model(protobuf_string)
onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for LogicalAnd with domain_version of 6

==> Context: Bad node spec: input: "trip_count__153"…
(output truncated)

  3. Output of trtexec on model-opset06.onnx:

Input filename: model-opset06.onnx
ONNX IR version: 0.0.3
Opset version: 6
Producer name: tf2onnx
Producer version: 1.7.2
Domain:
Model version: 0
Doc string:

Unsupported ONNX data type: UINT8 (2)
ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)
failed to parse onnx file
Engine could not be created
Engine could not be created

thanks,
Omer

Hi @omerbrandis,
If your model is not passing the ONNX checker, it won't parse through trtexec.
Hence you first need to check the conversion from TF to ONNX.
If the issue persists, you can raise it here.
Also, support for TRT <= 5 has been deprecated, hence we recommend you to use the latest TRT.
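If you re-export, a newer opset typically avoids "No Op registered … with domain_version of 6" style failures (assuming you also move to a recent TRT that supports that opset). A sketch of the re-export invocation, where the saved-model path and output name are placeholders and --saved-model, --opset and --output are tf2onnx CLI flags:

# Re-export the TF saved model at a newer opset (paths are placeholders)
python -m tf2onnx.convert \
  --saved-model ./mask_rcnn_saved_model \
  --opset 11 \
  --output model-opset11.onnx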
Thanks!

Hi NVES, can you help please?

I ran the ONNX checker with a model trained with Google AutoML and converted to ONNX. Here is the error thrown by the checker:

ValidationError: No Op registered for DecodeJpeg with domain_version of <from 10 to 13>