QAT using pytorch-quantization causes accuracy loss after exporting to ONNX

Description

I use pytorch-quantization to run QAT on a PointPillars model. Training works fine in PyTorch, but when I export the trained model to ONNX, accuracy degrades badly. In my opinion, this export step should introduce no accuracy loss at all.
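To illustrate why I expect the export to be lossless: pytorch-quantization inserts fake-quantization (quantize-then-dequantize) nodes, and the ONNX export maps each of them to a QuantizeLinear/DequantizeLinear pair with the same scale, so the arithmetic should be identical. Below is a pure-Python sketch of this round-trip, not my actual model code; `fake_quant`, `amax`, and `num_bits` are illustrative names, and I am assuming symmetric per-tensor quantization (pytorch-quantization's default for 8-bit signed):

```python
def fake_quant(x, amax, num_bits=8):
    """Symmetric per-tensor fake quantization: quantize to an integer
    grid, then dequantize back to float. This mirrors what a
    QuantizeLinear/DequantizeLinear pair computes in ONNX."""
    bound = 2 ** (num_bits - 1) - 1          # 127 for signed int8
    scale = amax / bound                     # step size of the integer grid
    q = max(-bound, min(bound, round(x / scale)))  # quantize + clamp
    return q * scale                         # dequantize

# If the exported graph carries the same scale, re-applying fake
# quantization changes nothing: the operation is idempotent.
x = 0.4217
once = fake_quant(x, amax=1.0)
twice = fake_quant(once, amax=1.0)
print(once == twice)  # prints True
```

So if the exported ONNX graph produced different outputs, my first suspicion would be that the scales (amax values) were not carried over exactly, or that some layer's quantizer was dropped during export.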

Environment

TensorRT Version: 8.5.2
NVIDIA GPU: RTX 3060
NVIDIA Driver Version: 470.161.03
CUDA Version: 11.3
CUDNN Version: 8.6

Operating System: Ubuntu 20.04
Python Version (if applicable): 3.8
PyTorch Version (if applicable): 1.11
pytorch-quantization: 2.1.2

Please raise this concern on Issues · pytorch/pytorch · GitHub.