Multi Stream in TensorRT

Description

How to run multiple CUDA streams in parallel on a single TensorRT engine.

Environment

TensorRT Version: 7.0
GPU Type: RTX
Nvidia Driver Version: 440.64
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorrt:20.03-py3

Hey,
I'd like to ask whether there are any resources that explain how to run multiple streams in parallel on a single engine. I am able to build an engine with a batch size > 1, but I have no idea how to process multiple streams (as many as the batch size) in parallel with a single engine.
Thanks

Hi @y14uc339,
Hope this link helps you:
https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#perform_inference_c
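In case it helps, here is a minimal C++ sketch (my own, not taken from the docs) of the usual pattern: create one IExecutionContext per CUDA stream from the same ICudaEngine and enqueue work on each stream without synchronizing in between, so the inferences can overlap on the GPU. It assumes `engine` has already been built or deserialized elsewhere, that it has a single input binding (index 0) and a single output binding (index 1), and the names `inferConcurrently`, `inputBytes`, and `outputBytes` are just placeholders for your own code.

```cpp
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <vector>

// Run hostInputs.size() independent inferences concurrently on one engine,
// one execution context + one CUDA stream per inference.
void inferConcurrently(nvinfer1::ICudaEngine* engine,
                       const std::vector<const float*>& hostInputs,
                       std::vector<float*>& hostOutputs,
                       size_t inputBytes, size_t outputBytes)
{
    const int numStreams = static_cast<int>(hostInputs.size());
    std::vector<nvinfer1::IExecutionContext*> contexts(numStreams);
    std::vector<cudaStream_t> streams(numStreams);
    std::vector<void*> deviceIn(numStreams), deviceOut(numStreams);

    for (int i = 0; i < numStreams; ++i)
    {
        contexts[i] = engine->createExecutionContext();  // one context per stream
        cudaStreamCreate(&streams[i]);
        cudaMalloc(&deviceIn[i], inputBytes);
        cudaMalloc(&deviceOut[i], outputBytes);
    }

    // Enqueue copy-in, inference, and copy-out on every stream without
    // synchronizing, so work from different streams can overlap on the GPU.
    for (int i = 0; i < numStreams; ++i)
    {
        void* bindings[] = {deviceIn[i], deviceOut[i]};
        cudaMemcpyAsync(deviceIn[i], hostInputs[i], inputBytes,
                        cudaMemcpyHostToDevice, streams[i]);
        contexts[i]->enqueueV2(bindings, streams[i], nullptr);  // explicit-batch engine
        cudaMemcpyAsync(hostOutputs[i], deviceOut[i], outputBytes,
                        cudaMemcpyDeviceToHost, streams[i]);
    }

    // Wait for all streams to finish, then release per-stream resources.
    for (int i = 0; i < numStreams; ++i)
    {
        cudaStreamSynchronize(streams[i]);
        cudaStreamDestroy(streams[i]);
        cudaFree(deviceIn[i]);
        cudaFree(deviceOut[i]);
        contexts[i]->destroy();
    }
}
```

Two caveats: if your engine was built with an implicit batch dimension, use `context->enqueue(batchSize, bindings, stream, nullptr)` instead of `enqueueV2()`; and the host buffers should be pinned (e.g. allocated with `cudaHostAlloc`) for the `cudaMemcpyAsync` calls to actually overlap with compute.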

Thanks!