[TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1318, condition: allInputDimensionsSpecified(routine)

Description

I built a RetinaFace model via PyTorch → ONNX → TensorRT.
First, I exported the .onnx with a dynamic batch dimension as follows:

input_names = ["input"]
    output_names = ["output"]
    inputs = torch.randn(1, 3, args.long_side, args.long_side).to(device)
    dynamic_axes = {
        'input': {
            0: 'batch_size'
        },
        'output': {
            0: 'batch_size'
        }
    }

    torch_out = torch.onnx._export(net, 
        inputs,
        output_onnx,
        export_params=True,
        verbose=False,
        input_names=input_names,
        output_names=output_names,
        dynamic_axes=dynamic_axes,
        opset_version=11
    )
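
To confirm the export actually produced a dynamic batch dimension, you can inspect the graph input (a minimal sketch; the filename matches the model used below):

import onnx

m = onnx.load("Retinaface_m25_dynamic_batch.onnx")
# A dim_param of "batch_size" on dim 0 confirms the dynamic axis took effect.
print(m.graph.input[0].type.tensor_type.shape)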

Then I used the trtexec command to generate the engine file:

trtexec --onnx=Retinaface_m25_dynamic_batch.onnx --verbose --explicitBatch --minShapes=input:1x3x640x640 --optShapes=input:4x3x640x640 --maxShapes=input:8x3x640x640 --shapes=input:5x3x640x640 --saveEngine=./models/weights/retinaface.trt
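
For reference, the equivalent build through the TensorRT Python API looks roughly like this (a sketch, assuming TensorRT 7.x and the same input name and shape ranges as the trtexec flags):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(EXPLICIT_BATCH)
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("Retinaface_m25_dynamic_batch.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB
# One optimization profile covering min/opt/max batch, like the trtexec flags.
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 640, 640), (4, 3, 640, 640), (8, 3, 640, 640))
config.add_optimization_profile(profile)

engine = builder.build_engine(network, config)
with open("./models/weights/retinaface.trt", "wb") as f:
    f.write(engine.serialize())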

I ran inference following the onnx_yolov3 Python example in the TensorRT repo and got this error:

[TensorRT] ERROR: Parameter check failed at: engine.cpp::resolveSlots::1318, condition: allInputDimensionsSpecified(routine)

I debugged, and it happens right after this line:

context.execute_async(bindings=bindings, stream_handle=stream.handle)

I don’t know how to fix this problem. Please help me.
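
(A sketch of how the binding state can be inspected just before that call, assuming engine and context are the deserialized engine and its execution context:)

# With a dynamic-batch engine, the input binding reports -1 for the batch
# dimension until the context is given a concrete shape.
print(engine.get_binding_shape(0))           # e.g. (-1, 3, 640, 640)
print(context.all_binding_shapes_specified)  # False here, so execution fails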

Environment

TensorRT Version: tensorrt-7.2.3.4
GPU Type: T4
Nvidia Driver Version: 455.32
CUDA Version: 11.1
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7
PyTorch Version (if applicable): 1.9
Baremetal or Container (if container which image + tag):

Relevant Files

Steps To Reproduce


Hi,
Could you share the ONNX model and the script, if not shared already, so that we can assist you better?
In the meantime you can try a few things:

  1. Validate your model with the below snippet:

check_model.py

import onnx

filename = "yourONNXmodel.onnx"  # placeholder path to your model
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hi @NVES ,
I already checked with the ONNX script, and I also used trtexec --loadEngine to verify that the engine works. Here are the trtexec --verbose log and the ONNX model.
Retinaface_m25_dynamic_batch.onnx (1.6 MB)
verbose_trt.txt (740.8 KB)


Please share the repro inference script with us so we can try it from our end.

Sorry, I can’t share my script, but it looks like this repo.

I made a few small changes from that, but not much.

Please help me! I can’t find any solution for this.

Hi @nguyenkhacduyngoc,

It may be difficult to debug without the inference script and complete error logs. You may DM them to us. Please make sure you’re creating the stream and allocating memory correctly.

Hi, thanks for your help.

I finally found the solution; it was my mistake.
My script didn’t call set_binding_shape on the context before each new inference, so the dynamic batch dimension was never resolved:

context.set_binding_shape(0, (BATCH, 3, INPUT_H, INPUT_W))
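
For anyone hitting the same error, here is a minimal sketch of the inference path with the shape set before execution (assuming TensorRT 7.x with pycuda, a single input at binding 0, a single output at binding 1, and placeholder values for BATCH, INPUT_H, INPUT_W within the profile range):

import numpy as np
import pycuda.autoinit  # noqa: F401, creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
BATCH, INPUT_H, INPUT_W = 4, 640, 640  # must fall inside the optimization profile

with open("./models/weights/retinaface.trt", "rb") as f, \
        trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# The fix: resolve the dynamic batch dimension before allocating/executing.
context.set_binding_shape(0, (BATCH, 3, INPUT_H, INPUT_W))
assert context.all_binding_shapes_specified

stream = cuda.Stream()
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    shape = context.get_binding_shape(i)  # concrete now that the batch is set
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(trt.volume(shape), dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Copy input in, run, copy output back (dummy input data for illustration).
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
# execute_async_v2 is the explicit-batch variant of execute_async.
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)
stream.synchronize()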