Can't train yolo_v4 with qat

Im following the guide on qat: Improving INT8 Accuracy Using Quantization Aware Training and the NVIDIA Transfer Learning Toolkit | NVIDIA Developer Blog

I have pretrained a yolo_v4 model and pruned it. I would like to retrain the pruned model with qat.
In the config file, if I set “enable_qat: true” and use the argument “pruned_model_path” for the pretrained model path, the model seemingly does not train with QAT, as tlt-export still asks me for “cal_image_dir”.
If I instead use “pretrain_model_path” for the pretrained model path, training ends with the error “ValueError: conv1 has incorrect shape in pretrained model.”

How do I train a yolo_v4 model with qat?

In order to train a QAT model, both the initial training and the retraining must be done with QAT enabled.
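A minimal sketch of what that means in practice, assuming the TLT yolo_v4 spec-file format (field names per the yolo_v4 training_config; the model path shown is a hypothetical placeholder):

```
# Initial training spec — QAT enabled from the very first training run
training_config {
  enable_qat: true
  # ... other training_config fields (batch size, epochs, etc.) unchanged
}

# Retraining spec after pruning — load the pruned QAT model, keep QAT on
training_config {
  enable_qat: true
  pruned_model_path: "/workspace/models/yolov4_pruned.tlt"  # hypothetical path
  # ... other training_config fields unchanged
}
```

With QDQ nodes present from the initial training, the pruned model can be retrained with QAT, and tlt-export can then derive the INT8 calibration cache from those nodes instead of requiring cal_image_dir.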

Thank you for the reply.
Why does the guide then have the following steps?

  1. Train an unpruned object detection model.
  2. Prune the trained model to get the most compute savings possible without compromising accuracy, using tlt-prune .
  3. Retrain this model with QAT enabled.
  4. Evaluate the retrained model to check for recovered accuracy with the unpruned model.

The guide makes it sound like the initial training should be done without QAT.

Sorry for the inconvenience.

Only detectnet_v2 supports retraining a non-QAT pruned model with enable_qat=True.
FRCNN, SSD, RetinaNet, Yolo_v3 and Yolo_v4 do not support it.

We will state that clearly in the user guide, and will also throw an error if enable_qat is set to True but the pretrained weights don’t have QDQ nodes.