Implementing LearningRateAnnealing in loss.aggregator

Hello all,

I am struggling with implementing the learning rate annealing algorithm during training.

I found that modulus.loss.aggregator has an LRAnnealing class, so we can use learning rate annealing for loss aggregation, but how can I use this LRAnnealing class with the Constraint classes?

For example, if I have two constraints,
BC = PointwiseBoundaryConstraint(…)
interior = PointwiseInteriorConstraint(…)
and want to balance the loss between BC and interior with the learning rate annealing algorithm, how do I implement it?

I also found custom_aggregator.py in the modulus.examples.turbulent_channel directory, but this aggregator class is not used in the example code.

Any suggestions, links, or advice (any sample code for loss aggregation) would be highly appreciated.

Thanks.


Hi @heechangkim, yes, the LR annealing algorithm is already implemented in Modulus. To use it, all you need to do is set the loss entry in your .yaml config file to lr_annealing. For example, for the Helmholtz example:

defaults:
  - modulus_default
  - arch:
      - fully_connected
  - scheduler: tf_exponential_lr
  - optimizer: adam
  - loss: lr_annealing  # <---- HERE
  - _self_

scheduler:
  decay_rate: 0.95
  decay_steps: 200

training:
  rec_results_freq: 1000
  rec_constraint_freq: 2000
  max_steps: 20000

batch_size:
  wall: 800
  interior: 4000
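
For reference, the balancing scheme behind this aggregator follows the learning rate annealing of Wang, Teng & Perdikaris ("Understanding and mitigating gradient pathologies in physics-informed neural networks"): each loss term's weight is driven toward the ratio between the largest gradient magnitude of a reference (residual) loss and the mean gradient magnitude of that term, smoothed with a moving average. Below is a rough, self-contained PyTorch sketch of that idea, not Modulus's actual LRAnnealing code; the function name lr_annealing_aggregate, the ref_key argument, and the alpha value are made up for illustration:

import torch

def lr_annealing_aggregate(losses, params, lambdas, alpha=0.01, ref_key="interior"):
    # losses:  dict of scalar loss tensors, e.g. {"interior": ..., "BC": ...}
    # params:  network parameters used for the gradient statistics
    # lambdas: dict of running weights for the non-reference terms (updated in place)
    params = list(params)
    # gradient of the reference (residual) loss w.r.t. the network parameters
    ref_grads = torch.autograd.grad(
        losses[ref_key], params, retain_graph=True, allow_unused=True
    )
    max_ref = max(g.abs().max() for g in ref_grads if g is not None)

    total = losses[ref_key]
    for name, loss in losses.items():
        if name == ref_key:
            continue
        grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
        mean_grad = torch.mean(torch.stack([g.abs().mean() for g in grads if g is not None]))
        lambda_hat = max_ref / (mean_grad + 1e-8)
        # exponential moving average keeps the weights from jumping step to step
        lambdas[name] = (1 - alpha) * lambdas.get(name, 1.0) + alpha * lambda_hat.detach()
        total = total + lambdas[name] * loss
    return total

With the config-based route above you never call anything like this yourself: the Solver picks up loss: lr_annealing from the config and applies the aggregator to the losses produced by all constraints in the domain (BC and interior in your case), so no change to the Constraint definitions is needed.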