TensorRT ONNX build engine ERROR


I’m trying to use multi-batch for my model. My code references onnx_to_tensorrt_multibatch.py

But I encountered an issue that I have no idea how to deal with.
I have uploaded the error output; please take a look.

TensorRT version is 7.1.0 on a Jetson Xavier NX

issue_output.txt (67.6 KB)

I_DEEP = 1
I_WIDTH = 128
I_HEIGHT = 128

# INPUT_NAME = "x.1"  # overridden by the line below; keep whichever matches the model's actual input name
INPUT_NAME = "input.1"

def get_engine(onnx_file_path, engine_file_path=""):
    """Attempts to load a serialized engine if available, otherwise builds a new TensorRT engine and saves it."""

    def build_engine():
        """Takes an ONNX file and creates a TensorRT engine to run inference with"""
        with trt.Builder(TRT_LOGGER) as builder, builder.create_network(
                rtr_common.EXPLICIT_BATCH) as network, builder.create_builder_config() as config, \
                trt.OnnxParser(network, TRT_LOGGER) as parser:
            config.max_workspace_size = 1 << 28  # 256 MiB
            # builder.max_batch_size is ignored for explicit-batch networks
            # Parse model file
            if not os.path.exists(onnx_file_path):
                print('ONNX file {} not found.'.format(onnx_file_path))
                return None
            print('Loading ONNX file from path {}...'.format(onnx_file_path))
            with open(onnx_file_path, 'rb') as model:
                print('Beginning ONNX file parsing')
                if not parser.parse(model.read()):
                    print('ERROR: Failed to parse the ONNX file.')
                    for error in range(parser.num_errors):
                        print(parser.get_error(error))
                    return None

            # Make the batch dimension dynamic; the optimization profile below
            # supplies the min/opt/max batch sizes.
            network.get_input(0).shape = [-1, I_DEEP, I_WIDTH, I_HEIGHT]
            profile = builder.create_optimization_profile()
            profile.set_shape(INPUT_NAME, (1, I_DEEP, I_WIDTH, I_HEIGHT), (16, I_DEEP, I_WIDTH, I_HEIGHT),
                              (32, I_DEEP, I_WIDTH, I_HEIGHT))
            config.add_optimization_profile(profile)  # required when the network has dynamic shapes
            print('Completed parsing of ONNX file')
            print('Building an engine from file {}; this may take a while...'.format(onnx_file_path))

            engine = builder.build_engine(network, config)
            print("Completed creating Engine")
            with open(engine_file_path, "wb") as f:
                f.write(engine.serialize())
            return engine

    if os.path.exists(engine_file_path):
        # If a serialized engine exists, use it instead of building an engine.
        print("Reading engine from file {}".format(engine_file_path))
        with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
            return runtime.deserialize_cuda_engine(f.read())
    return build_engine()
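For reference, the three tuples passed to profile.set_shape are the (min, opt, max) shapes for the dynamic batch dimension, and only the leading dimension varies. A small pure-Python helper (hypothetical, not part of the sample script) makes the pattern explicit:

```python
def profile_shapes(chw, min_batch=1, opt_batch=16, max_batch=32):
    """Build the (min, opt, max) shape tuples for a TensorRT optimization
    profile, varying only the leading batch dimension."""
    return tuple((batch, *chw) for batch in (min_batch, opt_batch, max_batch))

# For the 1x128x128 input above:
min_shape, opt_shape, max_shape = profile_shapes((1, 128, 128))
print(min_shape, opt_shape, max_shape)
# (1, 1, 128, 128) (16, 1, 128, 128) (32, 1, 128, 128)
```

With such a helper, `profile.set_shape(INPUT_NAME, min_shape, opt_shape, max_shape)` would be equivalent to the literal tuples in the script.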


It looks like you are facing a similar issue to this topic:

May I know which opset version is used for your ONNX model?
You should use opset-11 for TensorRT 7.1 compatibility:


Thanks for your quick response.
From your link, I found that the poster solved the problem by replacing the PyTorch APIs with TensorRT APIs. Is my understanding correct?
My problem is that the model was created by other people and I don’t have the source code, only the trained ONNX file. I cannot change it.
How can I fix it in this situation?

Thanks again!

I have uploaded the ONNX file to https://github.com/lzzyha/myfiles/blob/master/rc1bn_d5_f32_i128_o128_m3_c1.zip

Could you kindly try to parse it and take a look?


We can build your model with the JetPack 4.4 product release (TensorRT 7.1.3).
Would you mind reflashing your device and giving it a try?

/usr/src/tensorrt/bin/trtexec --onnx=rc1bn_d5_f32_i128_o128_m3_c1.onnx --explicitBatch --batch=1 --workspace=4096