IntegralBoundaryConstraint leads to constant loss

Hello, I’m currently trying to learn Modulus. My experience with neural networks is admittedly rusty, and I have no prior experience with CFD.

The Problem:
If I include the following code for an IntegralBoundaryConstraint, which represents the inlet and outlet of a system containing water flow, my loss value becomes constant, which obviously makes it impossible for the model to converge. Does anyone have tips on how I should troubleshoot this? The code is modified from the Aneurysm example.

Note that gpm is currently set to a value of 2 and the coordinate system is in mm (coords are normalized to fit within a unit cube).

    # Integral Continuity 1 (inlet): target net flow of +gpm into the domain
    integral_continuity_1 = IntegralBoundaryConstraint(
        nodes=nodes,
        geometry=inlet_mesh,
        outvar={"normal_dot_vel": gpm},
        batch_size=1,
        integral_batch_size=cfg.batch_size.integral_continuity,
        lambda_weighting={"normal_dot_vel": 0.1},
    )
    domain.add_constraint(integral_continuity_1, "integral_continuity_1")

    # Integral Continuity 2 (outlet): matching flow of -gpm out of the domain
    # (distinct constraint names so the second does not overwrite the first)
    integral_continuity_2 = IntegralBoundaryConstraint(
        nodes=nodes,
        geometry=outlet_mesh,
        outvar={"normal_dot_vel": -gpm},
        batch_size=1,
        integral_batch_size=cfg.batch_size.integral_continuity,
        lambda_weighting={"normal_dot_vel": 0.1},
    )
    domain.add_constraint(integral_continuity_2, "integral_continuity_2")

Here’s a sample of the training log output

[18:52:55] - [step:      93100] loss:  8.000e-01, time/iteration:  5.398e+01 ms
[18:52:59] - [step:      93200] loss:  8.000e-01, time/iteration:  3.567e+01 ms
[18:53:02] - [step:      93300] loss:  8.000e-01, time/iteration:  3.570e+01 ms
[18:53:06] - [step:      93400] loss:  8.000e-01, time/iteration:  3.567e+01 ms
[18:53:09] - [step:      93500] loss:  8.000e-01, time/iteration:  3.565e+01 ms
[18:53:13] - [step:      93600] loss:  8.000e-01, time/iteration:  3.566e+01 ms
[18:53:17] - [step:      93700] loss:  8.000e-01, time/iteration:  3.560e+01 ms
[18:53:20] - [step:      93800] loss:  8.000e-01, time/iteration:  3.558e+01 ms
[18:53:24] - [step:      93900] loss:  8.000e-01, time/iteration:  3.546e+01 ms
[18:53:27] - [step:      94000] saved checkpoint to outputs/foobar

Update: if I change the lambda_weighting={"normal_dot_vel": 1e-8} parameter to a value closer to the scale of the loss without the IntegralBoundaryConstraint, then I begin to see changes in the loss value (see the sketch after the list below). This leads me to believe one of the following may be true:

  1. Integral constraint weighting should be tuned to match the scale of the loss function’s output.
  2. The model is very bad at maintaining the integral constraint, so its losses completely overpower the rest of the data.
  3. I’ve overthought this whole thing, and if I were more patient the loss would eventually reflect progress.
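
For reference, here is the adjusted inlet constraint. It is identical to the snippet above except for the lambda_weighting value; 1e-8 is simply what happened to put this term on roughly the same scale as my other loss terms, not a general recommendation:

    # Same inlet constraint as above, with lambda_weighting scaled down so the
    # weighted integral loss is on the same order as the remaining loss terms
    integral_continuity_1 = IntegralBoundaryConstraint(
        nodes=nodes,
        geometry=inlet_mesh,
        outvar={"normal_dot_vel": gpm},
        batch_size=1,
        integral_batch_size=cfg.batch_size.integral_continuity,
        lambda_weighting={"normal_dot_vel": 1e-8},
    )
    domain.add_constraint(integral_continuity_1, "integral_continuity_1")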

Log output with adjusted lambda_weighting

[19:56:40] - [step:     192000] loss:  8.052e-08, time/iteration:  3.745e+01 ms
[19:56:44] - [step:     192100] loss:  8.043e-08, time/iteration:  3.620e+01 ms
[19:56:47] - [step:     192200] loss:  8.057e-08, time/iteration:  3.596e+01 ms
[19:56:51] - [step:     192300] loss:  8.174e-08, time/iteration:  3.634e+01 ms
[19:56:55] - [step:     192400] loss:  8.023e-08, time/iteration:  3.686e+01 ms
[19:56:58] - [step:     192500] loss:  8.048e-08, time/iteration:  3.591e+01 ms
[19:57:02] - [step:     192600] loss:  8.020e-08, time/iteration:  3.590e+01 ms
[19:57:06] - [step:     192700] loss:  8.116e-08, time/iteration:  3.662e+01 ms
[19:57:09] - [step:     192800] loss:  8.048e-08, time/iteration:  3.557e+01 ms
[19:57:13] - [step:     192900] loss:  8.064e-08, time/iteration:  3.553e+01 ms

Hi @npstrike

For many problems, balancing the different constraints is a tricky and empirical process. Typically, it’s not that the model is bad at satisfying one constraint or the other, but rather both of them, and tuning is essential to balance these objectives.
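
One thing that can take some of the guesswork out of the tuning is a monitor on the quantity each constraint is enforcing, so you can watch it independently of the weighted loss. Below is a minimal sketch in the style of the Aneurysm example's monitors; the import path differs between Modulus releases (modulus.sym.domain.monitor in newer ones), and the sample size and metric name here are arbitrary:

    import torch
    from modulus.domain.monitor import PointwiseMonitor

    # Track the integrated normal velocity (mass flow) through the inlet so the
    # integral constraint can be judged on its own, independent of lambda_weighting
    inlet_monitor = PointwiseMonitor(
        inlet_mesh.sample_boundary(1024),  # sample size is arbitrary
        output_names=["normal_dot_vel"],
        metrics={
            "inlet_mass_flow": lambda var: torch.sum(
                var["area"] * var["normal_dot_vel"]
            )
        },
        nodes=nodes,
    )
    domain.add_monitor(inlet_monitor)

The monitor values are recorded during training, so you can see whether the integral target is actually being approached while you adjust the weights.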

Also be aware that the Aneurysm example takes a very long time to fully converge.

In summary, I think your takeaways are all correct, based on my personal experience.


Thanks for the input. I’m going to continue messing with it, but it sounds like I’m at least thinking about it in the right way. I’ll mark your response as a solution.
