Add constraints and inferencers for Navier-Stokes equations (time=True)

Hi,
I am using Modulus for a problem similar to the “conjugate heat transfer” example.
I first set up the equations with time=False, as in the example, and everything worked fine. My problem needs the time variable, so I switched to time=True. This resulted in the following error:

[10:19:24] - JIT using the NVFuser TorchScript backend
[10:19:24] - JitManager: {'_enabled': True, '_arch_mode': <JitArchMode.ONLY_ACTIVATION: 1>, '_use_nvfuser': True, '_autograd_nodes': False}
[10:19:24] - GraphManager: {'_func_arch': False, '_debug': False, '_func_arch_allow_partial_hessian': True}
####################################
could not unroll graph!
This is probably because you are asking to compute a value that is not an output of any node
####################################
invar: [x, y, z, normal_x, normal_y, normal_z, area]
requested var: [u, v, w]
computable var: [x, y, z, normal_x, normal_y, normal_z, area, continuity]
####################################
Nodes in graph: 
node: Sympy Node: continuity
evaluate: SympyToTorch
inputs: []
derivatives: [u__x, v__y, w__z]
outputs: [continuity]
optimize: False
node: Sympy Node: momentum_x
evaluate: SympyToTorch
inputs: [sdf, u, v, w]
derivatives: [p__x, sdf__x, sdf__y, sdf__z, u__t, u__x, u__x__x, u__x__y, u__x__z, u__y, u__y__y, u__y__z, u__z, u__z__z, v__x, v__x__x, v__x__z, v__y, v__y__x, v__y__y, v__y__z, v__z, v__z__z, w__x, w__x__x, w__x__z, w__y, w__y__x, w__y__y, w__y__z, w__z, w__z__z]
outputs: [momentum_x]
optimize: False
node: Sympy Node: momentum_y
evaluate: SympyToTorch
inputs: [sdf, u, v, w]
derivatives: [p__y, sdf__x, sdf__y, sdf__z, u__x, u__x__x, u__x__y, u__x__z, u__y, u__y__y, u__y__z, u__z, u__z__z, v__t, v__x, v__x__x, v__x__z, v__y, v__y__x, v__y__y, v__y__z, v__z, v__z__z, w__x, w__x__x, w__x__z, w__y, w__y__x, w__y__y, w__y__z, w__z, w__z__z]
outputs: [momentum_y]
optimize: False
node: Sympy Node: momentum_z
evaluate: SympyToTorch
inputs: [sdf, u, v, w]
derivatives: [p__z, sdf__x, sdf__y, sdf__z, u__x, u__x__x, u__x__y, u__x__z, u__y, u__y__y, u__y__z, u__z, u__z__z, v__x, v__x__x, v__x__z, v__y, v__y__x, v__y__y, v__y__z, v__z, v__z__z, w__t, w__x, w__x__x, w__x__z, w__y, w__y__x, w__y__y, w__y__z, w__z, w__z__z]
outputs: [momentum_z]
optimize: False
node: Sympy Node: nu
evaluate: SympyToTorch
inputs: [sdf]
derivatives: [u__x, u__y, u__z, v__x, v__y, v__z, w__x, w__y, w__z]
outputs: [nu]
optimize: False
node: Sympy Node: normal_dot_vel
evaluate: SympyToTorch
inputs: [normal_x, normal_y, normal_z, u, v, w]
derivatives: []
outputs: [normal_dot_vel]
optimize: False
node: Arch Node: flow_network
evaluate: FullyConnectedArch
inputs: [x, y, z, t]
derivatives: []
outputs: [u, v, w, p]
optimize: True
####################################
Error executing job with overrides: []
Traceback (most recent call last):
  File "fullyConnected_5_256_time_inferencer_flow.py", line 70, in run
    inferencer_1 = PointwiseInferencer(
  File "/modulus/modulus/domain/inferencer/pointwise.py", line 66, in __init__
    self.model = Graph(
  File "/modulus/modulus/graph.py", line 94, in __init__
    raise RuntimeError("Failed Unrolling Graph")
RuntimeError: Failed Unrolling Graph

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Below is my code:

        nu = 1.9531 * pow(10, -5)
        rho = 3.233386

        geo = MyGeometry()

        ze = ZeroEquation(nu=nu, dim=3, time=True, max_distance=geo.channel_radius)
        ns = NavierStokes(nu=ze.equations["nu"], rho=rho, dim=3, time=True)
        navier_stokes_nodes = ns.make_nodes() + ze.make_nodes()

        normal_dot_vel = NormalDotVec()

        input_keys = [Key("x"), Key("y"), Key("z"), Key("t")]

        flow_net = FullyConnectedArch(input_keys=input_keys, output_keys=[Key("u"), Key("v"), Key("w"), Key("p")], nr_layers=5, layer_size=256)

        flow_nodes = (navier_stokes_nodes + normal_dot_vel.make_nodes() + [flow_net.make_node(name="flow_network")])

        inlet_vel = 3.54

        flow_domain = Domain()

        x = Symbol("x")
        y = Symbol("y")
        z = Symbol("z")
        t = Symbol("t")

        inferencer_1 = PointwiseInferencer(
                invar=geo.longitudinal_section_plane_x0.sample_boundary(50000),
                output_names=["u", "v", "w"],
                nodes=flow_nodes,
        )
        flow_domain.add_inferencer(inferencer_1, "longitudinal_section_1")

        # Inlet
        constraint_inlet = PointwiseBoundaryConstraint(
                nodes=flow_nodes,
                geometry=geo.square_inlet,
                outvar={"u": 0, "v": 0, "w": inlet_vel},
                batch_size=cfg.batch_size.Inlet,
                lambda_weighting={
                "u": 1.0,
                "v": 1.0,
                "w": 1.0,
                },  # weight zero on edges
                criteria = pow(x, 2) + pow(y, 2) <= pow(geo.channel_radius, 2),
                batch_per_epoch=5000,
        )
        flow_domain.add_constraint(constraint_inlet, "inlet")

After this, as in the “Conjugate heat transfer” example, I define other constraints for the full geometry. Then I call the solve() function:

        # make solver
        flow_slv = Solver(cfg, flow_domain)

        # start flow solver
        flow_slv.solve()

        flow_slv.eval()

Even if I change the line input_keys = [Key("x"), Key("y"), Key("z"), Key("t")] to input_keys = [Key("x"), Key("y"), Key("z")], the inferencers and the constraints run, but the solver fails with the following error:

[10:31:06] - attempting to restore from: outputs/fullyConnected_5_256_time_inferencer_flow
[10:31:06] - optimizer checkpoint not found
[10:31:06] - model flow_network.0.pth not found
Error executing job with overrides: []
Traceback (most recent call last):
  File "fullyConnected_5_256_time_inferencer_flow.py", line 237, in run
    flow_slv.solve()
  File "/modulus/modulus/solver/solver.py", line 159, in solve
    self._train_loop(sigterm_handler)
  File "/modulus/modulus/trainer.py", line 521, in _train_loop
    loss, losses = self._cuda_graph_training_step(step)
  File "/modulus/modulus/trainer.py", line 702, in _cuda_graph_training_step
    self.loss_static, self.losses_static = self.compute_gradients(
  File "/modulus/modulus/trainer.py", line 54, in adam_compute_gradients
    losses_minibatch = self.compute_losses(step)
  File "/modulus/modulus/solver/solver.py", line 52, in compute_losses
    return self.domain.compute_losses(step)
  File "/modulus/modulus/domain/domain.py", line 133, in compute_losses
    constraint.forward()
  File "/modulus/modulus/domain/constraint/continuous.py", line 116, in forward
    self._output_vars = self.model(self._input_vars)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
    return forward_call(*input, **kwargs)
  File "/modulus/modulus/graph.py", line 220, in forward
    outvar.update(e(outvar))
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1186, in _call_impl
    return forward_call(*input, **kwargs)
  File "/modulus/modulus/eq/derivatives.py", line 84, in forward
    grad_var = self.prepare_input(input_var, grad_sizes.keys())
  File "/modulus/modulus/eq/derivatives.py", line 72, in prepare_input
    return [input_variables[x] for x in mask]
  File "/modulus/modulus/eq/derivatives.py", line 72, in <listcomp>
    return [input_variables[x] for x in mask]
KeyError: 't'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Since there are no examples in the guide with time=True, I don’t understand where I went wrong.
I appreciate any suggestions.
Thanks

Hi @tom_02

The reason you’re getting that graph error is that the transient N-S equations require the additional input variable t, which isn’t provided by the geometry (only x, y, z) or by your neural network (which outputs u, v, w, p). So we need to provide that variable when creating the constraint.

This is one of the purposes of the parameterization field in the point constraints that use the geometry module. It can be used in two ways: to control parameterized geometry, and to provide additional input variables to constraints.

Let’s say you want to solve over the time interval [0, 1] during training. You should be able to achieve this via:

constraint_inlet = PointwiseBoundaryConstraint(
                nodes=flow_nodes,
                geometry=geo.square_inlet,
                outvar={"u": 0, "v": 0, "w": inlet_vel},
                ...
                parameterization={t: (0, 1)}, # t is a sympy Symbol!
                batch_per_epoch=5000,
        )

An example of this (granted, for a specific time) can be seen in the Taylor Green example. Using parameterization ranges is also demonstrated in several examples, such as the FPGA heat sink.
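
Note that the PointwiseInferencer in your script will hit the same issue: it takes a plain invar dictionary of sampled points rather than a parameterization, so a time column has to be added to the sampled arrays by hand. A minimal sketch, assuming sample_boundary returns a dictionary of NumPy arrays and using an illustrative time value of 0.5:

        import numpy as np

        # sample_boundary returns a dict of arrays keyed by variable name (x, y, z, normals, area)
        invar = geo.longitudinal_section_plane_x0.sample_boundary(50000)
        # add a constant time column of the same shape so the graph can feed t to the network
        invar["t"] = np.full_like(invar["x"], 0.5)

        inferencer_1 = PointwiseInferencer(
                invar=invar,
                output_names=["u", "v", "w"],
                nodes=flow_nodes,
        )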

Let me know if this doesn’t work.


Hi @ngeneva

I followed the instructions in the taylor_green example: I included the time variable and added the “time_window_net”, but there still seem to be problems, now with the “MovingTimeWindowArch” object.

Error received:

[09:40:51] - JIT using the NVFuser TorchScript backend
[09:40:51] - JitManager: {'_enabled': True, '_arch_mode': <JitArchMode.ONLY_ACTIVATION: 1>, '_use_nvfuser': True, '_autograd_nodes': False}
[09:40:51] - GraphManager: {'_func_arch': False, '_debug': False, '_func_arch_allow_partial_hessian': True}
Error executing job with overrides: []
Traceback (most recent call last):
  File "fullyConnected_5_256_time_inferencer_flow.py", line 62, in run
    time_window_net = MovingTimeWindowArch(flow_net, time_window_size)
  File "/modulus/modulus/models/moving_time_window.py", line 48, in __init__
    self.arch = copy.deepcopy(arch)
  File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/opt/conda/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/opt/conda/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/opt/conda/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/opt/conda/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/opt/conda/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/opt/conda/lib/python3.8/copy.py", line 296, in _reconstruct
    value = deepcopy(value, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 172, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "/opt/conda/lib/python3.8/copy.py", line 270, in _reconstruct
    state = deepcopy(state, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 230, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "/opt/conda/lib/python3.8/copy.py", line 161, in deepcopy
    rv = reductor(4)
  File "/opt/conda/lib/python3.8/site-packages/torch/jit/_script.py", line 62, in _reduce
    raise pickle.PickleError("ScriptFunction cannot be pickled")
_pickle.PickleError: ScriptFunction cannot be pickled

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

Below is my code:

        nu = 1.9531 * pow(10, -5) 
        rho = 3.233386 

        time_window_size = 1.0
        t = Symbol("t")
        time_range = {t: (0, time_window_size)}
        nr_time_windows = 10

        geo = Autoclave()

        ze = ZeroEquation(nu=nu, dim=3, time=True, max_distance=geo.channel_radius)
        ns = NavierStokes(nu=ze.equations["nu"], rho=rho, dim=3, time=True)
        navier_stokes_nodes = ns.make_nodes() + ze.make_nodes()

        normal_dot_vel = NormalDotVec()

        input_keys = [Key("x"), Key("y"), Key("z"), Key("t")]

        flow_net = FullyConnectedArch(input_keys=input_keys, 
                output_keys=[Key("u"), Key("v"), Key("w"), Key("p")], 
                nr_layers=5, 
                layer_size=256)

        time_window_net = MovingTimeWindowArch(flow_net, time_window_size)

        flow_nodes = (navier_stokes_nodes +
                normal_dot_vel.make_nodes() +
                [time_window_net.make_node(name="time_window_network")])

        inlet_vel = 3.54

        flow_domain = Domain("Flow conditions")
        window_domain = Domain("Window")

        x = Symbol("x")
        y = Symbol("y")
        z = Symbol("z")

Then I added inferencers and constraints to the domain.

I don’t understand the reason for the error, since the code is copied from the example.

Thanks for your help.

Hi @tom_02

If you’re deciding to try the moving time-window approach, I would suggest turning off JIT in your config and trying again. This error seems to be TorchScript related.
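
A minimal sketch of what that might look like, assuming the standard Modulus Hydra config layout (the exact option name can differ between versions): add this to your config.yaml, or pass the equivalent jit=false override on the command line.

        # config.yaml
        jit: false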

