About PyTorch QAT and PyTorch-to-TensorRT conversion

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.6 and DriveWorks 4.0
DRIVE OS Linux 5.2.0
[y] DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
[y] Linux
QNX
other

Hardware Platform
[y] NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
1.7.0.8846
[y] other

Host Machine Version
[y] native Ubuntu 18.04
other

In TensorRT 8.0, is there a tool to convert a PyTorch model into a TensorRT engine with INT8 precision directly? The PyTorch model can be trained via QAT, so that we can get an INT8 TRT engine without calibration.
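For context, the workflow I have in mind looks roughly like this (a minimal sketch assuming NVIDIA's pytorch-quantization toolkit; the model, shapes, and file names are placeholders):

import torch
import torchvision
from pytorch_quantization import quant_modules, quant_nn

# Monkey-patch torch.nn layers with quantized equivalents
# (must run before the model is constructed).
quant_modules.initialize()

model = torchvision.models.resnet18(pretrained=True)

# ... calibrate the quantizer ranges here, then fine-tune (QAT) ...

# Emit QuantizeLinear/DequantizeLinear (Q/DQ) nodes in the ONNX graph
# instead of PyTorch fake-quant ops.
quant_nn.TensorQuantizer.use_fb_fake_quant = True

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model_qat.onnx", opset_version=13)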

Dear @wang_chen2,
FYI,
DRIVE OS 5.2.0 ships with TensorRT 6.4. Engines generated with TensorRT 8.0 do not work with TensorRT 6.4.

The suggested workflow for a PyTorch model is PyTorch -> ONNX -> TensorRT. The trtexec tool in TensorRT accepts an ONNX model and generates a TensorRT engine.
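For example (file names are placeholders; for a QAT model whose ONNX graph already contains Q/DQ nodes, --int8 uses the scales baked into the graph, so no calibration cache is needed):

trtexec --onnx=model_qat.onnx --int8 --saveEngine=model_qat.engine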

Hi, in the future we will use Orin and TensorRT 8.0. I want to use a PyTorch-to-TensorRT tool to get a QAT model.
The quantized model can be exported to ONNX and imported by TensorRT 8.0 and later. What I want to know is whether, after importing the model into TensorRT 8.0, the model will run inference in INT8.
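For reference, this is roughly how I would import the Q/DQ ONNX model with the TensorRT 8 Python API and request INT8 (a sketch; the file names and workspace size are assumptions, not a definitive implementation):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model_qat.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 30  # 1 GiB, an arbitrary choice
# Allow INT8 kernels; with Q/DQ nodes the scales come from the graph,
# so no calibrator is set.
config.set_flag(trt.BuilderFlag.INT8)

engine_bytes = builder.build_serialized_network(network, config)
with open("model_qat.engine", "wb") as f:
    f.write(engine_bytes)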