Failed to build engine due to a dynamic input error

Description

dynamic input is missing dimensions in profile 0

Environment

TensorRT Version: 8.4.1.5
GPU Type: RTX2060
Nvidia Driver Version: 470.141.03
CUDA Version: 11.3
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 22.04
Python Version (if applicable): 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.12.1
Baremetal or Container (if container which image + tag):

Relevant Files

hawp_b.onnx (8.0 MB)

Steps To Reproduce

The uploaded ONNX model passes the ONNX checker. It is a simple subnetwork that I extracted from the full model, but when I convert it into a TensorRT engine, the following errors emerge:

[10/26/2022-23:41:34] [TRT] [E] 4: [network.cpp::validate::2997] Error Code 4: Internal Error (onnx::MaxPool_1635: dynamic input is missing dimensions in profile 0.)
[10/26/2022-23:41:34] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )

This is probably caused by the dynamic inputs. My Python conversion code is as follows:

def onnx2trt(onnx_path, engine_path):
    logger = trt.Logger(trt.Logger.ERROR)
    builder = trt.Builder(logger)
    flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    network = builder.create_network(flag)
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, 'rb') as model:
        if not parser.parse(model.read()):
            print('Error: Failed to parse the ONNX file.')
            for error in range(parser.num_errors):
                print(parser.get_error(error))
            return None
    config = builder.create_builder_config()
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
    profile = builder.create_optimization_profile()
    # profile.set_shape_input('onnx::MaxPool_1635', [-1, 128, 32], [1000, 128, 32], [9999, 128, 32])
    profile.set_shape_input('onnx::MaxPool_1635', *[[-1, 128, 32]] * 3)
    config.add_optimization_profile(profile)

    engine = builder.build_engine(network, config)
    # for i in range(network.num_inputs):
    #     tensor = network.get_input(i)
    #     print(tensor.name, trt.nptype(tensor.dtype), tensor.shape)
    # print(network.num_layers)
    # for i in range(network.num_outputs):
    #     tensor = network.get_output(i)
    #     print(tensor.name, tensor.shape)
    serialized_engine = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(serialized_engine)
        print("onnx2trt successful!")

Can anyone help me solve this? I'm really a beginner…

Hi @pastelkwx ,
The issue looks like it's in the way you are defining the shapes.
The format for defining shapes should be something like the following.
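The commented-out line in your script is close, but two things need fixing: `set_shape_input` applies only to shape tensors (tensors whose *values* are interpreted as shapes); for an ordinary execution tensor with dynamic dimensions you want `set_shape`, and the min/opt/max shapes must all be concrete (no -1). A minimal sketch, assuming the dynamic input is named `onnx::MaxPool_1635` and only its first dimension varies (the bounds below are illustrative, not prescriptive):

```python
# Concrete min/opt/max shapes for the dynamic (first) dimension.
# These bounds are examples; pick values that match your workload.
MIN_SHAPE = (1, 128, 32)
OPT_SHAPE = (1000, 128, 32)
MAX_SHAPE = (9999, 128, 32)

def add_profile(builder, config, input_name='onnx::MaxPool_1635'):
    """Attach an optimization profile covering the dynamic input.

    Uses IOptimizationProfile.set_shape, which is for execution
    tensors with dynamic dimensions, rather than set_shape_input,
    which is only for shape tensors. Every dimension in the profile
    must be a concrete positive value: -1 is not allowed here.
    """
    profile = builder.create_optimization_profile()
    profile.set_shape(input_name, MIN_SHAPE, OPT_SHAPE, MAX_SHAPE)
    config.add_optimization_profile(profile)
    return profile
```

With this profile in place, the "dynamic input is missing dimensions in profile 0" error should go away, because profile 0 now supplies concrete bounds for every dynamic dimension of that input.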

Could you please try trtexec command in verbose mode?
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

Thanks

