Is there any layer that is supported in FP16 but not in INT8?


Environment

TensorRT Version: 7.1.3
GPU Type:
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.4.0
Baremetal or Container (if container which image + tag):


Hi @user48094 ,
Could you please share more details so we can understand what you are asking for?
Thanks!

Hi, please refer to the links below for performing inference in INT8.

Thanks!
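In case it helps, here is a minimal sketch of building an INT8 engine from an ONNX model with the TensorRT 7.x Python API. The model path (`model.onnx`), input shape, and the `DummyCalibrator` that feeds random data are placeholders for illustration only; a real deployment needs a calibrator that reads your own preprocessed calibration samples.

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

class DummyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds random batches; replace with real preprocessed samples."""
    def __init__(self, batch_size=8, input_shape=(3, 224, 224), num_batches=10):
        super().__init__()
        self.batch_size = batch_size
        self.shape = (batch_size,) + input_shape
        self.num_batches = num_batches
        self.count = 0
        self.device_mem = cuda.mem_alloc(int(np.prod(self.shape)) * 4)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.count >= self.num_batches:
            return None  # signals that calibration data is exhausted
        batch = np.random.rand(*self.shape).astype(np.float32)
        cuda.memcpy_htod(self.device_mem, batch)
        self.count += 1
        return [int(self.device_mem)]

    def read_calibration_cache(self):
        return None

    def write_calibration_cache(self, cache):
        pass

def build_int8_engine(onnx_path="model.onnx"):
    builder = trt.Builder(LOGGER)
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30           # 1 GiB
    config.set_flag(trt.BuilderFlag.INT8)         # request INT8 kernels
    config.int8_calibrator = DummyCalibrator()
    return builder.build_engine(network, config)  # TensorRT 7.x builder API
```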

I want to deploy my PyTorch deep learning model on a TX2, so I need to know which layers are supported in INT8 and which in FP16.

Hi,

The following support matrix may help you.

Thank you.
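For reference, alongside the support matrix, the TensorRT Python API lets you check whether the target GPU (e.g., the TX2) has fast FP16/INT8 kernels and enable both precisions, so the builder can pick a precision per layer and fall back when a reduced-precision implementation is not available. A rough sketch, assuming the builder, config, and parsed network already exist (for example from the ONNX parsing code above); the helper names `configure_precision` and `list_layers` are illustrative:

```python
import tensorrt as trt

def configure_precision(builder, config, calibrator=None):
    """Enable whatever reduced precisions the device supports.

    With both flags set (and no strict-types flag), TensorRT chooses
    a precision per layer and falls back toward FP32 for layers that
    lack an INT8 or FP16 implementation.
    """
    if builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)
    if builder.platform_has_fast_int8 and calibrator is not None:
        config.set_flag(trt.BuilderFlag.INT8)
        config.int8_calibrator = calibrator
    return config

def list_layers(network):
    """Print the layer types in the parsed network, to compare
    against the layer support matrix in the TensorRT docs."""
    for i in range(network.num_layers):
        layer = network.get_layer(i)
        print(i, layer.name, layer.type)
```

Comparing the printed layer types against the support matrix is a quick way to see which layers of your model can run in INT8 and which will fall back to FP16 or FP32.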

Thank you.