Input data to quantized model

Description

What type of data should be fed to a PTQ (post-training quantized) model at inference time: fp32 (assuming the engine performs the conversion internally), or int8? A short sketch of how I would check this is below.
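For reference, here is a minimal sketch (Python, TensorRT 7.x API) of how one could check what dtype the engine actually expects for its inputs; the engine file name `model_int8.engine` is just a placeholder for illustration. My understanding is that an int8-calibrated engine usually still exposes fp32 input bindings unless the input tensor's precision is explicitly changed, but the binding dtype is the authoritative answer:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Placeholder path -- replace with your own serialized engine file.
ENGINE_PATH = "model_int8.engine"

with open(ENGINE_PATH, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print the dtype each binding expects; this tells you what to feed at inference.
for i in range(engine.num_bindings):
    kind = "input" if engine.binding_is_input(i) else "output"
    print(engine.get_binding_name(i), kind, engine.get_binding_dtype(i))
```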

Should the calibrator be fed the original fp32 data? I assume yes, so that it can determine the dynamic range. A rough sketch of what I mean follows.
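This is roughly how I understand the calibrator side: fp32 calibration batches go in, and TensorRT derives the int8 dynamic ranges from them. A sketch (Python, pycuda; the cache file name and batch handling are assumptions for illustration, and all batches are assumed to have the same shape):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt


class MyEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds fp32 calibration batches; TensorRT derives the int8 dynamic ranges."""

    def __init__(self, batches, cache_file="calib.cache"):
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.batches = iter(batches)      # iterable of np.float32 arrays, NCHW
        self.cache_file = cache_file
        first = next(self.batches)        # assumes all batches share this shape
        self.batch_size = first.shape[0]
        self.device_input = cuda.mem_alloc(first.nbytes)
        self.current = first

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.current is None:
            return None                   # no more batches -> calibration is done
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(self.current))
        self.current = next(self.batches, None)
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```

If I understand the build flow correctly, this would then be attached via the builder config (e.g. `config.set_flag(trt.BuilderFlag.INT8)` and `config.int8_calibrator = MyEntropyCalibrator(...)`) before building the engine. Please correct me if the data flow above is wrong.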

Environment

TensorRT Version: 7.0.1
GPU Type: GTX 1050 Ti
Nvidia Driver Version: 451.67
CUDA Version: 10.2
CUDNN Version: 7.6.5
Operating System + Version: Win 10
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):