About Batch Normalization in QAT training?

I am following this guide for QAT training: https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/yolov7_qat
I have a question.
Is Batch Normalization frozen during QAT training?

Regardless of whether the Toolkit controls the freeze or not, it is all handled natively by PyTorch: https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html

Gamma and beta are not frozen. For all practical purposes this is not very important, because QAT starts from an already converged model (so gamma and beta are quite stable) and we only fine-tune for a very small number of epochs (1+), so there is not much chance for gamma and beta to diverge (the updates use a small momentum). So, freeze or don't freeze, we think it shouldn't matter much.
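If you do want to freeze BN anyway, a minimal sketch along these lines should work; the helper name and the set of layer types it covers are our own choices for illustration, not something provided by the Toolkit:

```python
import torch.nn as nn

def freeze_batchnorm(model: nn.Module) -> None:
    """Hypothetical helper: freeze all BatchNorm layers for QAT fine-tuning.

    Puts BN layers in eval mode so their running mean/var stop updating,
    and disables gradients on gamma (weight) and beta (bias).
    """
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()  # stop updating running statistics
            if m.affine:  # gamma/beta only exist when affine=True
                m.weight.requires_grad_(False)  # freeze gamma
                m.bias.requires_grad_(False)    # freeze beta
```

Keep in mind that a later call to model.train() puts the BN layers back into training mode, so such a helper would have to be re-applied after every model.train() call.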

Here is some practice we know from PyTorch:
BN supports track_running_stats so users can explicitly control whether the running statistics are tracked.
Usually we do not change track_running_stats; it defaults to True, so the running statistics are updated during training. Therefore, before calling the ONNX export API torch.onnx.export(), the user needs to explicitly call model.eval() first to tell PyTorch that training/fine-tuning is done and that no parameters should change during the export (usually dummy data is fed to torch.onnx.export()), as in the sketch below.
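A rough sketch of that export step, assuming model is the fine-tuned QAT network (the input shape, file name, I/O names, and opset below are placeholders for illustration):

```python
import torch

# Fine-tuning is done: switch to eval mode so BN uses its running
# statistics and nothing is updated during the export pass.
model.eval()

# Dummy data with the shape the network expects (shape here is an assumption).
dummy_input = torch.randn(1, 3, 640, 640)

torch.onnx.export(
    model,
    dummy_input,
    "yolov7_qat.onnx",         # output file (placeholder name)
    opset_version=13,          # an opset that supports QuantizeLinear/DequantizeLinear
    input_names=["images"],    # placeholder I/O names
    output_names=["outputs"],
)
```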