Description
Hi, I have installed TensorRT and successfully run inference with a ResNet-50 model.
I know that TensorRT uses Tensor Cores by default to achieve the best performance.
But I am wondering whether there is any way to turn this behavior off.
That is, I want to run inference with TensorRT without using the Tensor Cores on my device.
Thank you very much
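For context, here is what I have tried so far. As far as I can tell there is no documented single flag to disable Tensor Cores, so this is only a sketch of a possible workaround: since TensorRT picks Tensor Core kernels mainly for FP16/INT8 (and for TF32 on Ampere GPUs), building a pure-FP32 engine with the TF32 builder flag cleared should avoid most Tensor Core tactics. The model-parsing step is omitted; whether this avoids every Tensor Core kernel is an assumption, not something I have confirmed.

```python
# Sketch: build a TensorRT engine that should mostly avoid Tensor Cores,
# assuming Tensor Core tactics are selected only for FP16/INT8 (and TF32).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB

# Do NOT set trt.BuilderFlag.FP16 or trt.BuilderFlag.INT8, so the
# builder stays on FP32 kernels.
# TF32 is enabled by default since TRT 7.1; clear it explicitly
# (it is a no-op on Turing like the T4, but relevant on Ampere).
config.clear_flag(trt.BuilderFlag.TF32)

# ... parse the ResNet-50 ONNX model into `network` here ...
# engine = builder.build_engine(network, config)
```

Is clearing the precision flags like this the recommended way, or is there a dedicated switch I am missing?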
Environment
TensorRT Version: 7.2.3
GPU Type: Tesla T4
Nvidia Driver Version: 460.32.03
CUDA Version: 11.2
CUDNN Version: 8
Operating System + Version: Ubuntu 16.04
Python Version (if applicable): 3.8.5
TensorFlow Version (if applicable): 2.4.1
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):