Description
We have several GPUs with different compute capabilities (RTX 2080 Ti, V100, GTX 1080 Ti, M40, etc.). Can we build a TensorRT engine just once and then run it on all of those GPUs? A sketch of our current build-and-serialize flow is included after this question.
Thanks.
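For context, this is roughly how we build and serialize the engine today — a minimal sketch using the TensorRT 5 C++ API. The network-population step and the `engine.plan` file name are placeholders; the actual model is omitted.

```cpp
// Sketch: build an engine on the current GPU and serialize it to disk.
#include "NvInfer.h"
#include <fstream>
#include <iostream>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    // ... populate `network` via a parser (Caffe/UFF/ONNX) or the layer API
    //     (model-specific, omitted here) ...

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 30);  // 1 GiB workspace

    // The engine is built for (and tuned to) the GPU visible at build time.
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine) {
        std::cerr << "engine build failed" << std::endl;
        return 1;
    }

    // Serialize the engine so it can (hopefully) be reused later.
    nvinfer1::IHostMemory* plan = engine->serialize();
    std::ofstream out("engine.plan", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());

    plan->destroy();
    engine->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```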
Environment
TensorRT Version: 5.0.2.6
GPU Type: RTX 2080 Ti, V100, GTX 1080 Ti, M40, etc.
Nvidia Driver Version: 430.40
CUDA Version: 10.0
CUDNN Version: 7.3.1
Operating System + Version: Ubuntu 16.04.5 LTS
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered
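Since this is a question rather than a crash report, there is no failing repro yet. For reference, this is roughly how we would load the serialized engine on each target GPU — a minimal sketch using the TensorRT 5 C++ API, assuming the `engine.plan` file produced by the build step above; buffer binding and inference are model-specific and omitted.

```cpp
// Sketch: deserialize a previously built engine on the current GPU.
#include "NvInfer.h"
#include <fstream>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the plan file produced on the build machine.
    std::ifstream in("engine.plan", std::ios::binary);
    std::vector<char> plan((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);

    // Deserialization is where a compute-capability mismatch would surface
    // if the engine was built on a different GPU generation.
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
    if (!engine) {
        std::cerr << "failed to deserialize engine on this GPU" << std::endl;
        return 1;
    }

    nvinfer1::IExecutionContext* context = engine->createExecutionContext();
    // ... bind input/output buffers and call context->execute()
    //     (model-specific, omitted here) ...

    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```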