About QAT model conversion to a TensorRT engine

Description

1. I use the NVIDIA QAT library to convert an ONNX model trained with quantization-aware training into a TensorRT engine file. If I convert with trtexec instead, do I need to add --int8 to the command? (A sketch of the conversion I have in mind follows this list.)
2. Do TensorRT 8 and earlier versions not support QAT model conversion?
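
For context, here is a minimal sketch of the conversion path in question, using the TensorRT Python API rather than trtexec; the file names model_qat.onnx and model_qat.engine are placeholders, and the trtexec command I would otherwise run is shown in the first comment.

    # trtexec equivalent: trtexec --onnx=model_qat.onnx --int8 --saveEngine=model_qat.engine
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    # ONNX models must be parsed into an explicit-batch network.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model_qat.onnx", "rb") as f:  # placeholder file name
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parse failed")

    config = builder.create_builder_config()
    # Enable INT8 kernels. No calibrator is attached here, on the
    # assumption that a Q/DQ (QAT) model carries its own scales.
    config.set_flag(trt.BuilderFlag.INT8)
    config.max_workspace_size = 1 << 30  # TRT 8.2-era API

    engine_bytes = builder.build_serialized_network(network, config)
    with open("model_qat.engine", "wb") as f:
        f.write(engine_bytes)

This mirrors what I expect trtexec --int8 to do; my question is whether that INT8 flag is actually required when the ONNX model already contains Q/DQ nodes.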

Environment

TensorRT Version: 8.2
GPU Type: RTX 3070
Nvidia Driver Version: 540
CUDA Version: 11.5
CUDNN Version:
Operating System + Version:
Python Version (if applicable): 3.8
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.12.1
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered