TensorRT mixed precision with supported hardware

Description

Hi! I am currently trying to transfer a TensorRT model from the server to my own computer, and I don't know whether my computer's graphics card will support the inference process.
The TensorRT model built on the server has mixed precision enabled (FP16 and INT8 modes). So I wonder what the minimum graphics-card requirement is (the server uses a Titan V) for running inference with the mixed-precision model.

Environment

TensorRT Version: 7.2.2.3
GPU Type: server: Titan V; PC: GTX 1050
Nvidia Driver Version: 450.51.05
CUDA Version: 11.0
CUDNN Version: 8.0.4
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7
TensorFlow Version (if applicable): 1.14
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi,
We recommend checking the supported features at the link below:
https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html
The same documentation also lists all supported operators.
For unsupported operators, you need to create a custom plugin to implement the operation.

Thanks!

Thank you for your reply!
I have checked the fourth section, "Hardware and Precision", in the link:
Support Matrix :: NVIDIA Deep Learning TensorRT Documentation.
I found that INT8 mode requires at least compute capability 6.1 (e.g., Tesla P4), so for my own PC, if I use a consumer GTX graphics card, I would guess the minimum requirement is a GTX 1060?
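That compute-capability comparison can be sketched as a small Python check. The lookup values below are taken from NVIDIA's public CUDA GPUs list; the selection of cards is illustrative, not exhaustive, and the thresholds are from the "Hardware and Precision" support matrix:

```python
# Minimal sketch: check whether a card's compute capability meets
# TensorRT's INT8 (>= 6.1) and FP16 (>= 5.3) thresholds.
# Lookup values from NVIDIA's public CUDA GPUs list (illustrative subset).

COMPUTE_CAPABILITY = {
    "Titan V":  (7, 0),  # Volta
    "Tesla P4": (6, 1),  # Pascal
    "GTX 1060": (6, 1),  # Pascal
    "GTX 1050": (6, 1),  # Pascal
}

INT8_MIN = (6, 1)  # from the "Hardware and Precision" support matrix
FP16_MIN = (5, 3)

def supports_int8(gpu_name):
    return COMPUTE_CAPABILITY[gpu_name] >= INT8_MIN

def supports_fp16(gpu_name):
    return COMPUTE_CAPABILITY[gpu_name] >= FP16_MIN

for name in COMPUTE_CAPABILITY:
    print(name, "INT8:", supports_int8(name), "FP16:", supports_fp16(name))
```

If these values are right, even the Pascal GTX 1050 reports compute capability 6.1, so it is worth checking the actual card rather than guessing by model tier (whether INT8 is fast on a given card is a separate question).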

Hi @364083042,

Yes. You may find this link useful for checking the CUDA compute capability of your devices.
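Rather than looking the card up in a table, the compute capability can also be queried on the machine itself. A sketch, assuming pycuda is installed (this is one option, not the only approach):

```python
# Sketch: query the local GPU's compute capability at runtime and compare
# it against TensorRT's INT8 threshold (6.1 per the support matrix).
# Assumes pycuda is available; falls back gracefully when it is not.

INT8_MIN = (6, 1)

def meets_int8_threshold(compute_capability):
    """Pure comparison, so the policy can be tested without a GPU."""
    return tuple(compute_capability) >= INT8_MIN

try:
    import pycuda.driver as cuda
    cuda.init()
    cc = cuda.Device(0).compute_capability()  # returns a (major, minor) tuple
    print("Compute capability:", cc, "INT8 OK:", meets_int8_threshold(cc))
except ImportError:
    print("pycuda not installed; skipping runtime query")
```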

Thank you.