Error Code: UNSUPPORTED_NODE with ResNet50 from ONNX (from Keras)

Hi,

I am trying to convert a Keras model (ResNet50 with ImageNet weights) to TensorRT 5.1.2.2 via ONNX. This is how I convert it to an ONNX model, and it seems to work.

from keras.applications.resnet50 import ResNet50
import onnxmltools

# Creates an ONNX file from a Keras model
def fromKeras2Onnx(outfile='proves.onnx'):
        model = ResNet50(weights='imagenet')
        onnx_model = onnxmltools.convert_keras(model, target_opset=7)
        onnxmltools.utils.save_model(onnx_model, outfile)
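
To check that the export really worked, the resulting file can be sanity-checked with the onnx package, roughly like this (just a sketch; 'proves.onnx' is the default file name from the function above):

import onnx

# Load the exported model and run the structural checker on it
onnx_model = onnx.load('proves.onnx')
onnx.checker.check_model(onnx_model)

# Show which opsets the exporter actually wrote, to confirm target_opset=7 took effect
print('Opsets: {}'.format([(imp.domain, imp.version) for imp in onnx_model.opset_import]))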

However, when I convert the ONNX model to TensorRT using this function:

import os
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Creates a TensorRT engine from an ONNX model
def fromOnnx2TensorRtEngine(onnx_file_path, engine_file_path):
        parsed = False
        with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
                builder.max_workspace_size = 1 << 30  # 1 GB
                builder.max_batch_size = 1
                builder.fp16_mode = True
                if not os.path.exists(onnx_file_path):
                        print('ONNX file {} not found!'.format(onnx_file_path))
                        exit(0)
                print('Loading ONNX file from path {}'.format(onnx_file_path))
                with open(onnx_file_path,'rb') as model:
                        print('Beginning ONNX file parsing')
                        parsed = parser.parse(model.read())  # returns True if parsing succeeded
                if parsed:
                        print('Completed parsing of ONNX file')
                        print('Building an engine from file {}; this may take a while'.format(onnx_file_path))
                        engine = builder.build_cuda_engine(network)
                        del parser
                        if engine:
                                print('Completed creating Engine')
                                with open(engine_file_path,"wb") as f:
                                        f.write(engine.serialize())
                                        print('Engine saved')
                        else:
                                print('Error building engine')
                                exit(1)
                else:
                        print('Number of errors: {}'.format(parser.num_errors))
                        error = parser.get_error(0)  # only the first error is reported; loop over parser.num_errors for all of them
                        del parser
                        desc = error.desc()
                        line = error.line()
                        code = error.code()
                        print('Description of the error: {}'.format(desc))
                        print('Line where the error occurred: {}'.format(line))
                        print('Error code: {}'.format(code))
                        print("Model was not parsed successfully")
                        exit(0)
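
Note that the error branch above only prints the first parser error. Looping over all of them uses the same calls (just a sketch, and it has to run before the del parser line):

# Print every error the ONNX parser collected, not just the first one
for i in range(parser.num_errors):
        err = parser.get_error(i)
        print('Error {}: {} (line {}, code {})'.format(i, err.desc(), err.line(), err.code()))

In my case parser.num_errors is 1 anyway, so the single error below is all I get.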

I get the following error while it is parsing the ONNX file:

Number of errors: 1
Description of the error: Assertion failed: onnx_padding[0] == 0 && onnx_padding[1] == 0 && onnx_padding[4] == 0 && onnx_padding[5] == 0
Line where the error occurred: 1366
Error code: UNSUPPORTED_NODE
Model was not parsed successfully

My first guess is that some layer is not supported by TensorRT. If that is the case, how can I solve it? Is there a way to know which layers are not supported by TensorRT?
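
One thing I have thought of is dumping what is actually inside the ONNX file: the set of op types (to compare against the operators the TensorRT ONNX parser supports) and any pads attributes, since the assertion above mentions onnx_padding. A rough sketch with the onnx package:

import onnx

model = onnx.load('proves.onnx')

# All op types used in the graph, to compare against the parser's supported operators
print('Op types: {}'.format(sorted({node.op_type for node in model.graph.node})))

# Every node with a 'pads' attribute, since the assertion complains about onnx_padding
for node in model.graph.node:
        for attr in node.attribute:
                if attr.name == 'pads':
                        print('{} ({}): pads = {}'.format(node.name, node.op_type, list(attr.ints)))

If any of those pads values are non-zero outside the two spatial dimensions, that could be what the assertion is rejecting, but I am only guessing here.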

The versions of the libraries/frameworks I am using:

  • onnx 1.4.1
  • onnxmltools 1.4.0
  • Keras 2.2.4
  • tensorrt 5.1.2.2

Thanks for your time :D

I've gone through the Keras implementation of ResNet50 looking for a layer or feature that is not supported by TensorRT, but the layers it uses all seem to be covered: activation layers, dense, Conv2D, batch normalization (https://devtalk.nvidia.com/default/topic/1027575/jetson-tx1/how-to-implement-batch-normalization-layer-by-tensorrt-scale-layer-/), etc. So I do not know where this problem comes from.
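
For reference, the distinct layer types of the Keras model can be listed like this (a quick sketch; it only shows what Keras uses, not what TensorRT supports):

from keras.applications.resnet50 import ResNet50

# Print the distinct Keras layer classes used by ResNet50
model = ResNet50(weights='imagenet')
print(sorted({type(layer).__name__ for layer in model.layers}))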