I'm following the QAT guide "Improving INT8 Accuracy Using Quantization Aware Training and the NVIDIA Transfer Learning Toolkit" on the NVIDIA Developer Blog.
I have pretrained a yolo_v4 model and pruned it, and I would now like to retrain the pruned model with QAT.
In the config file, if I set "enable_qat: true" and point "pruned_model_path" at the pruned model, the model does not seem to train with QAT: tlt-export still asks me for "cal_image_dir".
If I instead use "pretrain_model_path" for the pretrained model path, training fails with "ValueError: conv1 has incorrect shape in pretrained model."
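For reference, this is roughly how I set those fields in the retrain spec (a sketch only; the paths are placeholders, and I'm assuming the usual training_config layout of the yolo_v4 spec file):

```
training_config {
  # Enable quantization-aware training during the retrain step
  enable_qat: true
  # Placeholder path to the pruned .tlt model produced by the prune step
  pruned_model_path: "/workspace/experiments/yolo_v4_pruned.tlt"
  # ... other training parameters unchanged ...
}
```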
How do I correctly train a pruned yolo_v4 model with QAT?