TRT Runtime Parameter Check Failure related to allInputDimensionsSpecified

Description

I have converted a TF YoloV2 model to ONNX. This works successfully.
There appear to be dynamic shapes due to the 'Reorg' section prior to concatenation in the YoloV2 architecture.

I have defined the optimization profiles and the engine builds successfully. However, I am receiving the runtime error below when trying to run inference with the network. Any ideas about the cause would be greatly appreciated.

ERROR → [TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1092, condition: allInputDimensionsSpecified(routine)

Environment

TensorRT Version: 7.0
GPU Type: RTX 2060 SUPER
Nvidia Driver Version: 440.82
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): TRT 20.03 NGC container

Relevant Files

---- Engine Build and Serialize ----

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
with trt.Builder(TRT_LOGGER) as builder:
    config = builder.create_builder_config()
    profile = builder.create_optimization_profile()
    # Both inputs are pinned to a single shape (min == opt == max)
    profile.set_shape("input_1", (1, 416, 416, 3), (1, 416, 416, 3), (1, 416, 416, 3))
    profile.set_shape("input_2", (1, 1, 1, 1, 10, 4), (1, 1, 1, 1, 10, 4), (1, 1, 1, 1, 10, 4))
    config.add_optimization_profile(profile)
    network = builder.create_network(EXPLICIT_BATCH)
    with trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 32
        with open('model.onnx', 'rb') as model:
            parser.parse(model.read())
        for error in range(parser.num_errors):
            print('ERR: ' + str(parser.get_error(error)))

        engine = builder.build_engine(network, config)
        buf = engine.serialize()
        with open('onnx.trt', 'wb') as f:
            f.write(buf)

Could you please check whether you are setting the input dimensions on the execution context during inference, using setBindingDimensions (set_binding_shape in the Python API)? For an explicit-batch engine with dynamic input shapes, every input binding shape must be specified on the context before inference, otherwise TensorRT fails the allInputDimensionsSpecified check you are seeing.
Please refer to the sample below:
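
Here is a minimal sketch in the Python API (to match your build script). The binding names and shapes are taken from your optimization profile; the pycuda buffer allocation and the engine file name are assumptions for illustration, not your actual inference code:

import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open('onnx.trt', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

with engine.create_execution_context() as context:
    # Resolve the dynamic input shapes first; they must fall inside the
    # optimization profile used at build time.
    context.set_binding_shape(engine.get_binding_index('input_1'), (1, 416, 416, 3))
    context.set_binding_shape(engine.get_binding_index('input_2'), (1, 1, 1, 1, 10, 4))
    assert context.all_binding_shapes_specified  # guards against this exact error

    # Size the device buffers only after the shapes are fully specified,
    # since the output shapes are unknown until then.
    bindings = []
    for i in range(engine.num_bindings):
        dtype = trt.nptype(engine.get_binding_dtype(i))
        size = trt.volume(context.get_binding_shape(i)) * np.dtype(dtype).itemsize
        bindings.append(int(cuda.mem_alloc(size)))

    context.execute_v2(bindings)

The key point is that set_binding_shape is called on every dynamic input before any buffers are bound or inference is launched; in your script the shapes are only given to the builder profile, which is not enough at runtime.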

Thanks
