mAP drops after retraining pruned FasterRCNN model


I used the TLT tool to train a FasterRCNN model with default_spec_darknet53.txt, and the tlt-evaluate result (epoch 110) is AP: 0.5927, precision: 0.5981, recall: 0.6683, RPN_recall: 0.8842.

Then I used the tlt-prune tool to generate a pruned model, model_1_pruned.tlt, with a pruning ratio of 0.28241. I retrained it, but I find that AP and recall keep getting smaller while precision gets bigger:
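For reference, the prune step above looks roughly like this (a sketch only; the model filenames and $KEY are placeholders, not the exact values used here, and the flag set should be checked against `tlt-prune -h` for your TLT version):

```shell
# Prune the trained model. -pth is the pruning threshold: channels whose
# weight norm falls below it are removed, so a larger -pth prunes more.
# Filenames and $KEY below are placeholders (assumptions).
tlt-prune -m frcnn_darknet53_epoch_110.tlt \
          -o model_1_pruned.tlt \
          -pth 0.5 \
          -k $KEY
```

The tool reports the resulting pruning ratio (pruned parameter count / original parameter count) after it runs.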

epoch 1: AP: 0.32, precision: 0.52, recall: 0.40
epoch 50: AP: 0.29, precision: 0.64, recall: 0.35
epoch 100: AP: 0.26, precision: 0.72, recall: 0.31

Could it be that I have not retrained for enough epochs?

epoch 130: AP: 0.3572, precision: 0.5071, recall: 0.4699, RPN_recall: 0.6920
epoch 160: AP: 0.2654, precision: 0.5867, recall: 0.3486, RPN_recall: 0.6511

How many classes did you train?
Also, if the mAP after retraining is not as expected, it is necessary to run more experiments (including fine-tuning the hyperparameters and trying different pth values). Prune less, then retrain; tune the spec, retrain, etc.
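Trying different pth values can be sketched as a simple sweep (threshold values here are hypothetical, and filenames/$KEY are placeholders): prune at several thresholds, retrain each, and keep the variant whose retrained mAP is acceptable.

```shell
# Hypothetical pth sweep: smaller -pth removes fewer channels (prunes less).
for pth in 0.1 0.3 0.5; do
  tlt-prune -m frcnn_darknet53_epoch_110.tlt \
            -o model_pruned_pth${pth}.tlt \
            -pth ${pth} \
            -k $KEY
done
```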

Only one class: person.
My dataset is too big to run many trials, since each epoch takes one hour.
My pruning ratio is 0.28; do you think the problem is that I pruned too much?

Normally, yes. A pruning ratio of 0.28 means the pruned model keeps only about 28% of the original parameters, which is aggressive. So I suggest training on a small part of the training dataset first in order to fine-tune some hyperparameters.
Then train on the full dataset to get a better tlt model with the expected mAP.
Then set it as the pretrained model and continue with prune → retrain → etc.
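One way to carve out the small tuning subset suggested above is to copy a random fraction of image/label pairs into a separate directory. This is only a sketch; the KITTI-style `images/` + `labels/` layout and the function name are assumptions, not part of the TLT toolkit.

```python
import random
import shutil
from pathlib import Path

def sample_subset(src, dst, fraction=0.1, seed=42):
    """Copy a random fraction of image/label pairs into a smaller dataset
    for fast hyperparameter tuning.

    Assumes a KITTI-style layout: src/images/*.jpg and src/labels/*.txt,
    where each label file shares its stem with its image.
    """
    src, dst = Path(src), Path(dst)
    images = sorted((src / "images").glob("*"))
    random.Random(seed).shuffle(images)          # deterministic for a fixed seed
    keep = images[: max(1, int(len(images) * fraction))]
    (dst / "images").mkdir(parents=True, exist_ok=True)
    (dst / "labels").mkdir(parents=True, exist_ok=True)
    for img in keep:
        shutil.copy(img, dst / "images" / img.name)
        label = src / "labels" / (img.stem + ".txt")
        if label.exists():                       # copy the matching label, if any
            shutil.copy(label, dst / "labels" / label.name)
    return len(keep)
```

With one hour per epoch on the full set, a 10% subset makes a hyperparameter trial roughly 10x cheaper, which is usually enough to compare spec settings before committing to a full run.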

OK, I will train on part of my dataset to tune the hyperparameters.