Description
I have one class and multiple objects of this class. Each object should run its own inference. What are the options for doing that?
I decided that each object gets its own execution context. Is this right?
Multithreading should also be possible.
Are there any examples on this topic?
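To make the intended design concrete, here is a minimal sketch of the "one execution context per object, one thread per object" pattern I have in mind. Note the `Engine` and `Context` classes below are only stand-ins that mimic the roles of TensorRT's `ICudaEngine` / `IExecutionContext` (the real API needs a GPU to run); the point is that the engine is built once and shared, while each object owns its own context and uses it from a single thread:

```python
import threading

class Engine:
    """Stand-in for a deserialized ICudaEngine (shared, read-only)."""
    def create_execution_context(self):
        return Context(self)

class Context:
    """Stand-in for IExecutionContext (not thread-safe; one per object)."""
    def __init__(self, engine):
        self.engine = engine
    def execute(self, batch):
        # Real code would call execute_async_v2(...) on this context's
        # own CUDA stream; here we fake the inference with a doubling.
        return [x * 2 for x in batch]

class Worker:
    """Each object owns its own context, so workers can run in parallel threads."""
    def __init__(self, engine):
        self.context = engine.create_execution_context()
    def infer(self, batch):
        return self.context.execute(batch)

engine = Engine()  # built / deserialized once, shared by all objects
workers = [Worker(engine) for _ in range(4)]

results = [None] * len(workers)
def run(i, w):
    results[i] = w.infer([i, i + 1])

threads = [threading.Thread(target=run, args=(i, w))
           for i, w in enumerate(workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # one independent result per worker
```

Is this (one shared engine, one context plus one thread per object) the recommended structure, or is there a better-supported approach?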
Environment
TensorRT Version: TensorRT-8.0.1.6_CUDA_11.3
GPU Type: Nvidia Titan RTX
Nvidia Driver Version:
CUDA Version: 11.3
CUDNN Version: 8.2.0
Operating System + Version: Windows 10
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):