Unsupported ONNX data type: UINT8 (2)

Hi,
I am trying to parse an ssd_mobilenet ONNX model and create an engine with TensorRT.
However, the error below seems to point to a version mismatch between the conversion tooling and what TensorRT supports.
I am running this on a Jetson Nano.
Can you please help with the following error?

Error :
Input filename: …/data/mnist/onnx_model.onnx
ONNX IR version: 0.0.5
Opset version: 10
Producer name: tf2onnx
Producer version: 1.5.0
Domain:
Model version: 0
Doc string:
WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
Unsupported ONNX data type: UINT8 (2)
ERROR: /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/release/5.0/parsers/onnxOpenSource/ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)
ERROR: failed to parse onnx file

This may be related to https://devtalk.nvidia.com/default/topic/1047625/jetson-tx2/running-a-pytorch-network-converted-to-onnx-with-tensorrt-on-the-tx2/1 ; the fix should be available in TensorRT 5.1 and the package in JetPack 4.2.1.

Can you please share a link to JetPack 4.2.1? We are not able to find it.
Is this also applicable to the Nano?

Hi NVES,
The link only gives JetPack 4.2, whose disc image is 5505865 KB on a Windows machine. This same JetPack is running on my Nano, and it gives the error reported above.

Can you please cross-check? JetPack 4.2.1 seems not to be available to the public.
If it is available, please share the exact link.

Gentle reminder to the Nvidia team!
Please share the link for JetPack 4.2.1, as this fix is badly needed for us.

Gentle reminder!

Please provide the link for JetPack 4.2.1.

Hi,

If you couldn’t find the link at that time, it probably wasn’t released yet. You can currently find links to download 4.2.1, 4.2.2, 4.2.3, and various other archived versions at https://developer.nvidia.com/embedded/downloads.

Hi,

I also ran into this problem when converting the saved_model.pb from the uff_ssd Python sample to ONNX.
My command is

python3 -m tf2onnx.convert --opset 10 --fold_const --saved-model ./workspace/models/ssd_inception_v2_coco_2017_11_17/saved_model --output MODEL.onnx

Next, I parse this .onnx with parser.parse() and then get the error Unsupported ONNX data type: UINT8 (2).
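For context, the parsing step looks roughly like this (a minimal sketch against the TensorRT 7 Python API, not the sample's actual code; MODEL.onnx is a placeholder path):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# ONNX models require an explicit-batch network definition in TensorRT 7.
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("MODEL.onnx", "rb") as f:
        # parse() returns False: the importer rejects the uint8 graph
        # input (importInput) before it converts any nodes.
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))

The full output is: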

Get model onnx path. /home/chieh/Downloads/TensorRT-7.0.0.11/samples/python/onnx_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17/ssd_inception_v2_coco_2017_11_17.onnx
TensorRT inference engine settings:
  * Inference precision - DataType.FLOAT
  * Max batch size - 64

Loading ONNX file from path /home/chieh/Downloads/TensorRT-7.0.0.11/samples/python/onnx_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17/ssd_inception_v2_coco_2017_11_17.onnx...
onnx_file_path /home/chieh/Downloads/TensorRT-7.0.0.11/samples/python/onnx_ssd/utils/../workspace/models/ssd_inception_v2_coco_2017_11_17/ssd_inception_v2_coco_2017_11_17.onnx
Beginning ONNX file parsing
Unsupported ONNX data type: UINT8 (2)
ERROR: Failed to parse the ONNX file.
In node -1 (importInput): UNSUPPORTED_NODE: Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)
Traceback (most recent call last):
  File "voc_evaluation.py", line 495, in <module>
    parsed['trt_engine_datatype'], parsed['max_batch_size'])
  File "/home/chieh/Downloads/TensorRT-7.0.0.11/samples/python/onnx_ssd/utils/inference.py", line 119, in __init__
    engine_utils.save_engine(self.trt_engine, trt_engine_path)
  File "/home/chieh/Downloads/TensorRT-7.0.0.11/samples/python/onnx_ssd/utils/engine.py", line 185, in save_engine
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'

Indeed, I checked the model's input, which has type uint8[?,?,?,3].
However, the frozen_inference_graph.pb can be converted to .uff, and that successfully builds an engine and runs inference in the uff_ssd sample.
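For anyone who wants to verify this on their own model, this is roughly how I checked the input type (a small sketch using the onnx Python package; MODEL.onnx stands in for the real path):

import onnx

# Print the element type of every graph input.
# In TensorProto.DataType, 1 is FLOAT and 2 is UINT8 (the "(2)" in the error).
model = onnx.load("MODEL.onnx")
for inp in model.graph.input:
    elem_type = inp.type.tensor_type.elem_type
    print(inp.name, onnx.TensorProto.DataType.Name(elem_type))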

Is there any way to solve this?
Or do we have to rebuild the architecture and retrain the model from scratch?

TensorRT version: 7.0.0.11
CUDA version: 10.2
TensorFlow-gpu: 1.14.0
cuDNN version: 7.6.5
GPU: GTX 1060
Ubuntu: 18.04

Thanks!!

I’m having the exact same problem as Chieh. I need to migrate to ONNX rather than the deprecated UFF.

I too ran into this problem, and it appears that all of the models (except the quantized ones) in the TensorFlow detection model zoo repo contain input layers with a datatype of uint8. TensorRT is not compatible with this datatype (you know that already). However, the models at the ONNX Model Zoo all have input layers with a datatype of float32. Also, in this blog post, Speed up TensorFlow Inference on GPUs with TensorRT, the SavedModel (i.e., protobuf) files that come with the examples also have float32 input layers, though I do not know their source. From my preliminary testing, I was able to convert pb --> onnx --> trt engine for the ONNX Model Zoo files and the ones posted on the devblog page.
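Concretely, the pipeline I tested was just the two commands below (the same flags as Chieh's command; paths are placeholders):

python3 -m tf2onnx.convert --opset 10 --fold_const --saved-model ./saved_model --output model.onnx
trtexec --onnx=model.onnx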

So, with that being said, I think the solution is to do as Chieh suggested: either rebuild the model architecture to have an input layer with a float32 datatype (a toy sketch of this follows below), or download any of the ONNX Model Zoo models. In both cases, however, I think you will have to retrain no matter what. Unless someone else responds with a better solution, I think that is the only way (for now).
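To make the first option concrete, here is a toy illustration (sketched with the TF 2 Keras API rather than the TF 1.14 setup above, and not an actual SSD; layer names and shapes are made up) of pinning the input layer to float32 so the exported graph never carries a uint8 input:

import tensorflow as tf

# Toy stand-in for a detection backbone. The key detail is the explicit
# float32 dtype on the input layer, so tf2onnx exports a float32 graph input.
inputs = tf.keras.Input(shape=(300, 300, 3), dtype=tf.float32, name="images")
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
outputs = tf.keras.layers.GlobalAveragePooling2D()(x)
model = tf.keras.Model(inputs, outputs)

# Export as a SavedModel, then convert with tf2onnx as in the commands above.
model.save("./float32_model")

After retraining such a model, the inspection snippet above should report FLOAT for its graph input.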

Any solutions to this problem??

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
Unsupported ONNX data type: UINT8 (2)
ERROR: ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)
[05/29/2020-10:13:46] [E] Failed to parse onnx file
[05/29/2020-10:13:46] [E] Parsing model failed
[05/29/2020-10:13:46] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=inception_standard.onnx

Hey!

I have the same problem when converting my model to ONNX. Any solutions?
Unsupported ONNX data type: UINT8 (2)
ERROR: batch:1:191 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)
[06/29/2020-16:30:09] [E] Failed to parse onnx file
[06/29/2020-16:30:09] [E] Parsing model failed
[06/29/2020-16:30:09] [E] Engine creation failed
[06/29/2020-16:30:09] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec --onnx=/home/stackfusion/Downloads/train_batch_shape.onnx --shapes=input_3:1x200x200x3