Not able to run inference on multi-input models using TRT

Description

Inference fails when running a multi-input model with the snippet below:

import tensorflow as tf

# path to the TF-TRT converted SavedModel (omitted in the original post)
saved_model_dir_trt =
root = tf.saved_model.load(saved_model_dir_trt)
infer(load_test_data)   # infer: the loaded serving signature (its definition is not shown)

where load_test_data is a function that returns a list of preprocessed images to be fed into the trained model.
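For context, a multi-input SavedModel signature typically expects each input as a named tensor argument rather than a single Python list. A minimal sketch of that call pattern, assuming the default signature key 'serving_default' and placeholder input names 'input_1'/'input_2' (the real names can be checked with saved_model_cli show), is:

import tensorflow as tf

root = tf.saved_model.load(saved_model_dir_trt)
infer = root.signatures['serving_default']

images = load_test_data()  # list of preprocessed images, one per model input
feed = {name: tf.convert_to_tensor(img)
        for name, img in zip(['input_1', 'input_2'], images)}
outputs = infer(**feed)    # the signature is called with named tensors, not a plain list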

Environment

TensorRT Version: 7.1.3
GPU Type:
Nvidia Driver Version: 460
CUDA Version: 11.2
CUDNN Version: 8.0.2
Operating System + Version: Ubuntu 18
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 2.4.0

Hi,
The links below might be useful for you:
https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html#thread-safety

https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html
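For reference, the thread-safety guidance in the first link boils down to sharing one deserialized engine across threads while giving each thread its own execution context and CUDA stream. A rough sketch of that pattern with the TensorRT Python API is below; the plan-file name and the worker body are illustrative only, and buffer allocation/copies are omitted:

import threading
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine once; the engine object itself can be shared across threads.
with open('model.plan', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

def worker():
    # Each thread creates its own execution context; contexts must not be shared.
    context = engine.create_execution_context()
    # ... allocate device buffers, then run
    # context.execute_async_v2(bindings, stream_handle) on this thread's own stream ...

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()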
For multi-threading/streaming, we suggest using DeepStream or Triton.
For more details, we recommend raising the query on the DeepStream or Triton forum.

Thanks!