Hi,
How can I run two TensorRT engines (for example, two different YOLOv3 models) concurrently on a Jetson? The models are different.
How should I allocate device memory and execution contexts across multiple threads?
I am implementing this on a Jetson Xavier.
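
For reference, here is a rough, untested sketch of the pattern I have in mind, using the TensorRT Python API with PyCUDA: one shared CUDA context, and a separate execution context, CUDA stream, and set of buffers per thread. The engine file names (`yolov3_a.engine`, `yolov3_b.engine`), the assumption that binding 0 is the input, and the FP32 / static-shape buffers are only placeholders, not my actual setup.

```python
# Rough sketch (untested): two TensorRT engines, one thread each, on a single GPU.
# Assumptions: TensorRT 7.x/8.x Python API, PyCUDA installed, FP32 static-shape
# engines, binding 0 is the input; engine file names are placeholders.
import threading
import numpy as np
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def infer_worker(engine_path, device_ctx):
    # Make the shared CUDA context current in this worker thread.
    device_ctx.push()
    try:
        runtime = trt.Runtime(TRT_LOGGER)
        with open(engine_path, "rb") as f:
            engine = runtime.deserialize_cuda_engine(f.read())

        # Each thread owns its own execution context, stream, and buffers.
        context = engine.create_execution_context()
        stream = cuda.Stream()

        bindings, host_bufs, dev_bufs = [], [], []
        for i in range(engine.num_bindings):
            size = trt.volume(engine.get_binding_shape(i))
            host = cuda.pagelocked_empty(size, np.float32)
            dev = cuda.mem_alloc(host.nbytes)
            host_bufs.append(host)
            dev_bufs.append(dev)
            bindings.append(int(dev))

        # One inference pass: copy input, enqueue on this thread's stream, copy outputs back.
        cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
        context.execute_async_v2(bindings, stream.handle)
        for i in range(1, engine.num_bindings):
            cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
        stream.synchronize()
    finally:
        device_ctx.pop()

cuda.init()
shared_ctx = cuda.Device(0).make_context()  # one CUDA context shared by both threads
shared_ctx.pop()                            # release it from the main thread

threads = [
    threading.Thread(target=infer_worker, args=("yolov3_a.engine", shared_ctx)),
    threading.Thread(target=infer_worker, args=("yolov3_b.engine", shared_ctx)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
shared_ctx.detach()
```

My understanding is that an ICudaEngine can be shared, but each thread needs its own IExecutionContext and CUDA stream, and the PyCUDA context has to be pushed/popped in every worker thread. Is that the right pattern, or is there a better way to do this on Xavier?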
Environment
TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)