How to set enable_autoweighting in the training and retraining spec

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
Ubuntu, x86, RTX3090
• Network Type (Detectnet_v2)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file(If have, please share here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

I have a small proprietary dataset containing 4 classes. The annotation statistics from dataset_convert (in the TAO Jupyter notebook) are:

iva.detectnet_v2.dataio.dataset_converter_lib: 
Wrote the following numbers of objects:
b'people': 2275
b'dog': 1194
b'cat': 1102
b'horse': 450

So this is a typical imbalanced dataset.
I noticed that the class_weight and enable_autoweighting properties under the cost_function_config section, in both the train and retrain specs, look like they could help in this scenario. I want to understand how to set them for my case.

  1. If enable_autoweighting is set to true, does that mean the class_weight values are all meaningless?
  2. Should the class_weight for each class simply reflect the linear relation of its annotation count?
    If so, would the class_weight values for my case be 10, 20, 20, 40 respectively for the 4 classes?
  3. How much improvement can I expect after applying these settings?
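For question 2, here is a quick sketch of how inverse-frequency weights could be derived from the counts above. This normalizes so the most frequent class gets weight 1.0; it is my own assumption for illustration, not a formula documented for TAO.

```python
# Annotation counts from the dataset_convert output above.
counts = {"people": 2275, "dog": 1194, "cat": 1102, "horse": 450}

# Inverse-frequency weighting: the most frequent class gets 1.0,
# rarer classes get proportionally larger weights.
max_count = max(counts.values())
class_weights = {name: max_count / n for name, n in counts.items()}

for name, weight in sorted(class_weights.items(), key=lambda kv: kv[1]):
    print(f"{name}: class_weight ~= {weight:.2f}")
```

Note this gives relative weights (horse ends up weighted about 5x people), not the absolute 10/20/20/40 values I guessed above.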
  1. It is still used for computing cost.
  2. See the FAQ in the user guide:
  • Distribute the dataset class: How do I balance the weight between classes if the dataset has significantly higher samples for one class versus another? To account for imbalance, increase the class_weight for classes with fewer samples. You can also try disabling enable_autoweighting; in this case, initial_weight is used to control cov/regression weighting. It is important to keep the number of samples of different classes balanced, which helps improve mAP.
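For reference, the relevant part of a DetectNet_v2 spec would look roughly like the fragment below. The class_weight values here are illustrative for the dataset above (most frequent class at 1.0, rarest weighted higher), and the objective names and weights follow the DetectNet_v2 examples in the TAO documentation; adjust them for your own spec.

```
cost_function_config {
  enable_autoweighting: true
  max_objective_weight: 0.9999
  min_objective_weight: 0.0001
  target_classes {
    name: "people"
    class_weight: 1.0
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 1.0
    }
  }
  target_classes {
    name: "horse"
    class_weight: 5.0   # illustrative: higher weight for the rarest class
    coverage_foreground_weight: 0.05
    objectives {
      name: "cov"
      initial_weight: 1.0
      weight_target: 1.0
    }
    objectives {
      name: "bbox"
      initial_weight: 10.0
      weight_target: 1.0
    }
  }
}
```

The dog and cat classes would get their own target_classes blocks in the same pattern.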

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.