Memory allocation strategy for dynamic output shapes determined at inference time

Description

I converted a LightGlue ONNX model to a TensorRT engine and want to run inference with it. The model is a feature matcher, so its outputs depend on the input features: the output shapes are only known at inference time. How should I approach allocating enough memory for the outputs? The only strategy I can think of so far is worst-case allocation, i.e. replacing every dynamic dimension with an assumed upper bound and sizing the device buffers from that, as in the sketch below.
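Here is a rough Python sketch of that idea (not my actual code; the engine path, the MAX_KEYPOINTS bound, and the generic binding handling are assumptions):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

ENGINE_PATH = "lightglue.engine"   # placeholder path
MAX_KEYPOINTS = 2048               # assumed upper bound on keypoints/matches

with open(ENGINE_PATH, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Worst-case allocation: replace each dynamic (-1) dimension with the
# assumed upper bound so the output buffers are always large enough,
# regardless of how many matches the model actually produces.
device_buffers = []
for i in range(engine.num_bindings):
    shape = [MAX_KEYPOINTS if d < 0 else d
             for d in engine.get_binding_shape(i)]
    dtype = trt.nptype(engine.get_binding_dtype(i))
    n_bytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    device_buffers.append(cuda.mem_alloc(n_bytes))

# At inference time I would set the actual input shapes, e.g.
#   context.set_binding_shape(i, actual_input_shape)
# then call context.execute_v2([int(b) for b in device_buffers]) and copy
# back only the valid portion of each output.
```

Is this padding/worst-case approach the recommended way on TensorRT 8.4, or is there a better mechanism for outputs whose size is only determined at inference time?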

Environment

TensorRT Version: 8.4.1.5
GPU Type: Jetson Xavier NX
Nvidia Driver Version:
CUDA Version: 11.4
CUDNN Version:
Operating System + Version: JetPack (L4T 35.1.0)
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):