Concurrently running two or more engines with TensorRT

Description

Hi,
How can I run two engines (for example, two different YOLOv3 models) concurrently on a Jetson?
How should I allocate memory and execution contexts across multiple threads?
I am running this on a Jetson Xavier.

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,

Please refer to the links below:


Thanks

Hi,
Both comments state that it is possible to run two engines with TensorRT, but how? I’ve also seen this mentioned at the link below:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#serial_model_c

but I need sample code…
Many thanks

To make it easy for others who might search for the same thing, please keep discussion to a single thread.

Here is a post in one of the threads linked above with a link to some example code.
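In the meantime, here is a rough sketch of the threading pattern in Python. Note this is not runnable TensorRT code: `Engine` and `ExecutionContext` below are hypothetical stand-ins for `trt.ICudaEngine` and `trt.IExecutionContext`, and the "inference" is a placeholder. The key point (per TensorRT's thread-safety guidance) is that a deserialized engine can be shared, but each thread must create and own its own execution context (and its own CUDA stream).

```python
import threading

class ExecutionContext:
    # Stand-in for trt.IExecutionContext; in real code, inference would be
    # context.execute_async_v2(bindings, stream.handle) on a per-thread stream.
    def __init__(self, name):
        self.name = name

    def execute(self, batch):
        # Placeholder "inference" so the pattern is runnable here.
        return [x * 2 for x in batch]

class Engine:
    # Stand-in for trt.ICudaEngine; one engine per model, safe to share
    # across threads as long as each thread makes its own context.
    def __init__(self, name):
        self.name = name

    def create_execution_context(self):
        return ExecutionContext(self.name)

results = {}

def worker(engine, batch):
    # Each thread creates its OWN execution context; IExecutionContext
    # is not thread-safe, so contexts must never be shared between threads.
    context = engine.create_execution_context()
    results[engine.name] = context.execute(batch)

# Two different models running concurrently, one thread each.
engines = [Engine("yolov3"), Engine("yolov3-tiny")]
threads = [threading.Thread(target=worker, args=(e, [1, 2, 3]))
           for e in engines]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # one result per model, produced concurrently
```

On a Jetson, the same structure applies with real TensorRT objects: deserialize each serialized plan once with `runtime.deserialize_cuda_engine(...)`, then give every inference thread its own context and CUDA stream so the two models' kernels can overlap on the GPU.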