QAT for FRCNN experiment causing 0 mAP

I am currently running a Faster RCNN experiment in which I am attempting to use QAT during training. When QAT is enabled, the mAP scores for all classes are 0. The loss is also off: it either dips to very low values in the first epoch or jumps between low and high values throughout the experiment, regardless of the learning rate. An identical experiment run without QAT returns AP scores well above 60% after the first epoch. It should be noted that RPN recall with QAT on is nearly the same as with QAT off, in the high 90s; only recall, precision, and accuracy are 0 throughout the experiment. Experiment spec below:
experiment_spec.txt (6.4 KB)

May I know if it can be reproduced with the officially released Jupyter notebook? That notebook trains with the public KITTI dataset.

The Jupyter notebook seems to have worked fine with the KITTI data both with and without QAT. I will gradually adjust the experiment spec for my custom experiment to determine the exact cause of the issue and will report back.

It seems that the main issue revolved around the learning rate. QAT required a significant decrease in the initial LR values in order to avoid overfitting. For now, the problem is resolved in this way.
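For reference, the general pattern of restarting QAT fine-tuning with a much smaller initial learning rate looks roughly like the sketch below. This is only an illustration using TensorFlow's tensorflow_model_optimization Keras QAT API on a toy model, not the TAO FasterRCNN experiment spec used in this thread, and the LR value shown is an assumed placeholder rather than a recommended setting.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy float model, purely illustrative (the real case is a Faster RCNN
# configured through the experiment spec, not a hand-built Keras model).
base_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Insert fake-quantization nodes for quantization-aware training.
qat_model = tfmot.quantization.keras.quantize_model(base_model)

# Compile with a much smaller initial learning rate than the float run;
# 1e-4 here is an assumed placeholder to show the idea of scaling the LR
# down when QAT is enabled.
qat_model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```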

Thanks for sharing!
