How to reduce false positives in classification inference?

I am training a classification model for 12 defect classes. I also have a default folder for normal images that contain no defects. But now I am getting quite a lot of false positives, and it is not practical to retrain by adding all those false positives to the default folder. What would be the best way to reduce these false positives?
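One generic mitigation, independent of retraining, is to apply a confidence threshold at inference time and route low-confidence predictions to the default (normal) class. A minimal sketch, assuming a softmax output; the class names and threshold value here are hypothetical and would need tuning on a validation set:

```python
import numpy as np

def predict_with_threshold(probs, class_names, default="normal", thresh=0.8):
    """Report a defect label only when the model is confident.

    probs: 1-D softmax output of the classifier over the classes.
    Low-confidence predictions fall back to the default class,
    trading some recall for fewer false positives.
    """
    idx = int(np.argmax(probs))
    if probs[idx] < thresh:
        return default
    return class_names[idx]

classes = ["normal", "crack", "dent"]  # hypothetical class list
print(predict_with_threshold(np.array([0.3, 0.5, 0.2]), classes))   # normal
print(predict_with_threshold(np.array([0.05, 0.9, 0.05]), classes)) # crack
```

Sweeping `thresh` against a held-out set shows the trade-off directly: higher values cut false positives at the cost of missing some real defects.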

What are your training results? Can you paste your spec and the full log?
Additionally, for your case, you can consider:

  1. Is the data sufficient? How many images are there for each class?
  2. Changing the backbone

Yes, I have 5,500 training images for each class.
This is my training log file.
training.log (1.2 KB)

How about I train again with top_k = 1? It will reduce accuracy, but it should also reduce false positives. Then I can see what the changes are.

The top_k parameter is only used for evaluation. It lives inside eval_config and is not related to training.
Can you attach your current training spec too?
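For reference, a sketch of the evaluation block where top_k sits in a TAO classification spec; the paths and values below are placeholders, so check them against your own spec file:

```
eval_config {
  eval_dataset_path: "/path/to/val"    # placeholder path
  model_path: "/path/to/model.tlt"     # placeholder path
  top_k: 3
  batch_size: 256
  n_workers: 8
}
```

Setting top_k to 1 here changes only how evaluation accuracy is reported, not what the model learns.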

My training spec is attached.
classification_retrain_spec.log (1.1 KB)

One thing I would like to highlight: I have 12 classes in training, but only two of them produce most of the false positives. The defects in these two classes are small. The background image is 100x100, but the defect itself is only about 10x10 in size.
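One generic way to make a 10x10 defect more visible to the classifier is to tile the 100x100 image into overlapping crops, so the defect occupies a larger fraction of each input. This is a preprocessing sketch, not a TAO feature; the patch and stride sizes are illustrative:

```python
import numpy as np

def tile_image(img, patch=50, stride=25):
    """Split an HxW image into overlapping patches.

    In a 50x50 crop, a 10x10 defect covers 4% of the pixels,
    versus 1% of the full 100x100 image.
    """
    h, w = img.shape[:2]
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
    return patches

img = np.zeros((100, 100), dtype=np.uint8)
tiles = tile_image(img)
print(len(tiles))  # 3 x 3 = 9 patches
```

At inference you would classify each tile and flag the image as defective if any tile is (combined with a confidence threshold to keep false positives down).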

So I suggest triggering a new training run that focuses only on these two classes.
Also try fine-tuning the hyper-parameters.
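A training focused on the two hard classes can be deployed as a two-stage cascade: the general 12-class model makes a first pass, and predictions for the two problem classes are re-checked by the specialized model. A sketch with stand-in models; the class names and both model callables are hypothetical:

```python
# Hypothetical class names for the two classes that produce
# most of the false positives.
PROBLEM_CLASSES = {"scratch", "pinhole"}

def cascade_predict(image, general_model, specialist_model):
    """Run the general model; defer the hard classes to the specialist."""
    label, conf = general_model(image)
    if label in PROBLEM_CLASSES:
        label, conf = specialist_model(image)
    return label, conf

# Toy stand-in models for illustration only; a real setup would
# wrap two trained classifiers returning (label, confidence).
general = lambda img: ("scratch", 0.55)
specialist = lambda img: ("normal", 0.90)
print(cascade_predict(None, general, specialist))  # ('normal', 0.9)
```

The specialist only ever sees images the general model already flagged, so it can be trained on just those two classes plus normal examples.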