Description
We will have several TensorRT (ONNX) models running in parallel on a Jetson Nano. To avoid overloading the GPU, we plan to execute a few of them on the CPU instead. Does TensorRT support running an engine on the CPU, or is there a recommended way to mix CPU and GPU inference?
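For context, the split we have in mind looks roughly like the sketch below. The inference functions are stubs standing in for real engine/session calls (e.g. a TensorRT engine on the GPU and, if TensorRT cannot target the CPU, some CPU runtime for the rest); the model names and helper functions are illustrative, not from our actual code.

```python
# Illustrative sketch only: dispatch several models in parallel, with only
# some of them assigned to the GPU.  infer_gpu/infer_cpu are placeholders
# for real inference calls; model names ("det", "cls", "seg") are made up.
from concurrent.futures import ThreadPoolExecutor

def infer_gpu(name, data):
    # Stub standing in for a TensorRT engine execution on the GPU.
    return (name, "gpu", sum(data))

def infer_cpu(name, data):
    # Stub standing in for a CPU-side inference call.
    return (name, "cpu", sum(data))

# Assign each model to a backend so only a few engines contend for the GPU.
MODELS = {"det": infer_gpu, "cls": infer_cpu, "seg": infer_cpu}

def run_all(batch):
    """Run every model on the same input batch concurrently."""
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {name: pool.submit(fn, name, batch)
                   for name, fn in MODELS.items()}
        return {name: f.result() for name, f in futures.items()}

print(run_all([1, 2, 3]))
```

The question above is whether the CPU-side calls can also go through TensorRT, or whether a separate runtime is required for those models.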
Environment
TensorRT Version: 7.1
GPU Type: 128-core Maxwell
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04.5
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce the issue (GitHub repo, Google Drive, Dropbox, etc.).
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered