Unsupported ONNX data type: UINT8 (2)

I hit the same problem when trying to read an ONNX file with the TensorRT ONNX parser. This seems odd, since onnx/IR.md at main · onnx/onnx · GitHub specifically says uint8 tensor element types are supported. Is there something unique about “UINT8 (2)”, or is this a bug in the parser?

----------------------------------------------------------------
Input filename:   ../resources/mars-small128_batch_1.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    tf2onnx
Producer version: 1.6.3
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
Unsupported ONNX data type: UINT8 (2)
ERROR: images:0:188 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype

UPDATE:
Looks like the “(2)” in UINT8 (2) is just the enum value of the type. I wrote a Python script to edit the type of the input layer (as well as set the batch size), but this produced another error:

map/while/strided_slice: out of bounds slice, input dimensions = [128,64,3], start = [0,0,3], size = [128,64,3], stride = [1,1,-1].
Layer map/while/strided_slice failed validation

So I’m investigating that now.
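
If it helps to picture what that layer is doing: start = [0,0,3], stride = [1,1,-1] walks the last (channel) axis backwards, i.e. a channel reversal — my guess (not confirmed from the TensorRT source) is that this came from a BGR↔RGB swap in the original TensorFlow graph. In numpy terms, roughly:

```python
import numpy as np

# A [H, W, C] tensor matching the dimensions in the error message.
x = np.arange(128 * 64 * 3, dtype=np.float32).reshape(128, 64, 3)

# The negative stride on the channel axis reverses channel order,
# equivalent to this numpy slice.
flipped = x[:, :, ::-1]
print(flipped.shape)  # (128, 64, 3)
```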

In case anyone is interested, here’s my Python script for changing the input node type and the network batch size of an ONNX model file.

import onnx

def change_input_datatype(model, typeNdx):
    # typeNdx values follow the onnx.TensorProto DataType enum:
    # 1 = float32
    # 2 = uint8
    # 3 = int8
    # 4 = uint16
    # 5 = int16
    # 6 = int32
    # 7 = int64
    for input in model.graph.input:
        input.type.tensor_type.elem_type = typeNdx


def change_input_batchsize(model, batchSize):
    for input in model.graph.input:
        input.type.tensor_type.shape.dim[0].dim_value = batchSize
        #print("input: ", input)  # uncomment to see input layer details


def change_output_batchsize(model, batchSize):
    for output in model.graph.output:
        output.type.tensor_type.shape.dim[0].dim_value = batchSize
        #print("output: ", output)  # uncomment to see output layer details


onnx_model = onnx.load(<path to your original onnx model file>)

change_input_datatype(onnx_model, 1)
change_input_batchsize(onnx_model, 1)
change_output_batchsize(onnx_model, 1)

onnx.save(onnx_model, <path to your edited onnx model file>)