How to make nodes with NonDimensionalizer

Hi, I am trying to train using the NonDimensionalizer. I am using a pre-trained network with dimensional variables, so I prepare variables in the node list that are converted back to dimensional form by the NonDimensionalizer, but I get a “Failed unrolling graph” error.
Since I cannot provide the code I am using, I have attached a sample code that reproduces the same error.
The “Failed unrolling graph” error occurs for the variable ‘dummy_eq’; the flow of the requested variables in the sample is as follows.

‘dummy_eq’ in dummy_const
→ ‘dummy’ in DummyPDE
→ ‘dummy_output_scaled’ in nodes using NonDimensionalizer
→ ‘dummy_input_scaled’ in dummy_net
→ (*) ‘temp_scaled’ in nodes using NonDimensionalizer
→ ‘temp’ in nodes
→ ‘u’ and ‘v’ in nodes
→ ‘x’ and ‘y’ in flow_net

If I use L.81 instead of L.82 in the sample code (‘dummy_input_scaled’ is calculated directly from ‘temp’ rather than via ‘temp_scaled’, i.e., the (*) step of the above flow is skipped), the error does not occur.

I suspect that variables converted by the NonDimensionalizer cannot be used as inputs to the network.
How can I work around this error?

ldc_2d.py (5.9 KB)

Hi @user106225

For future reference pasting the graph error:

####################################
could not unroll graph!
This is probably because you are asking to compute a value that is not an output of any node
####################################
invar: [x, y, normal_x, normal_y, area]
requested var: [dummy_eq]
computable var: [x, y, normal_x, normal_y, area, continuity, u, v, p, temp, momentum_x, momentum_y]
####################################
Nodes in graph: 
node: Sympy Node: continuity
evaluate: SympyToTorch
inputs: []
derivatives: [u__x, v__y]
outputs: [continuity]
optimize: False
node: Sympy Node: momentum_x
evaluate: SympyToTorch
inputs: [u, v]
derivatives: [p__x, u__x, u__x__x, u__y, u__y__y]
outputs: [momentum_x]
optimize: False
node: Sympy Node: momentum_y
evaluate: SympyToTorch
inputs: [u, v]
derivatives: [p__y, v__x, v__x__x, v__y, v__y__y]
outputs: [momentum_y]
optimize: False
node: Sympy Node: dummy_eq
evaluate: SympyToTorch
inputs: [dummy]
derivatives: []
outputs: [dummy_eq]
optimize: False
node: Arch Node: flow_network
evaluate: FullyConnectedArch
inputs: [x, y]
derivatives: []
outputs: [u, v, p]
optimize: True
node: Arch Node: dummy_network
evaluate: FullyConnectedArch
inputs: [dummy_input_scaled]
derivatives: []
outputs: [dummy_output_scaled]
optimize: True
node: Sympy Node: temp
evaluate: SympyToTorch
inputs: [u, v]
derivatives: []
outputs: [temp]
optimize: False
node: Sympy Node: dummy_input_scaled
evaluate: SympyToTorch
inputs: [temp_scaled]
derivatives: []
outputs: [dummy_input_scaled]
optimize: False
node: Node
evaluate: _Scale
inputs: [u, v, p, temp, dummy_output_scaled]
derivatives: []
outputs: [u_scaled, v_scaled, p_scaled, temp_scaled, dummy]
optimize: False
####################################

The graph dump above is very informative. Thanks for posting your script with a minimal working example. From looking at your code, I think the problem is that you have a cyclic dependency in your graph when you use the non-dim node (swapping in line 81 removes the non-dim node from your graph). This type of error is a little trickier to spot than other graph issues.

I think the issue is with these three nodes:

+ [dummy_net.make_node(name="dummy_network")]
+ [Node.from_sympy(log(Symbol('temp_scaled')), 'dummy_input_scaled')]
+ Scaler(
    ['u', 'v', 'p', 'temp', 'dummy_output_scaled'],
    ['u_scaled', 'v_scaled', 'p_scaled', 'temp_scaled', 'dummy'],
    ['m/s', 'm/s', 'm^2/s^2', 'm/s', 's/m'],
    nd,
).make_node()

In short:

  • Your dummy network needs dummy_input_scaled
  • Your SymPy node needs temp_scaled to get dummy_input_scaled
  • Your Scaler node needs dummy_output_scaled to get temp_scaled which is the output of dummy network

The result is a cyclic graph, which cannot be executed. Removing dummy_output_scaled from the inputs of the Scaler node allows the script to build the graph, since this makes the symbolic graph a DAG. (Granted, this means the dummy variable is no longer produced, so I would create a separate Scaler node to go from dummy_output_scaled to dummy.)
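To see why this is a cycle, here is a minimal, self-contained Python sketch (not Modulus code; the node and variable names are taken from the graph dump above) that models each node as an (inputs, outputs) pair and checks whether the variable-dependency graph can be resolved:

```python
def find_cycle(nodes):
    """Return True if the variable-dependency graph contains a cycle.

    Each output variable depends on the FULL input list of the node
    that produces it (a node is evaluated as a whole).
    """
    # Map each variable to the input set of its producing node.
    deps = {}
    for inputs, outputs in nodes:
        for out in outputs:
            deps[out] = set(inputs)

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(var):
        if color.get(var, WHITE) == GRAY:
            return True          # back edge -> cycle
        if color.get(var, WHITE) == BLACK or var not in deps:
            return False         # already done, or a leaf input like x, y
        color[var] = GRAY
        if any(visit(d) for d in deps[var]):
            return True
        color[var] = BLACK
        return False

    return any(visit(out) for _, outputs in nodes for out in outputs)

# One Scaler node producing both temp_scaled and dummy: its inputs
# include dummy_output_scaled, so temp_scaled transitively needs the
# dummy network's output -> cycle.
cyclic = [
    (["dummy_input_scaled"], ["dummy_output_scaled"]),   # dummy_network
    (["temp_scaled"], ["dummy_input_scaled"]),           # SymPy node
    (["u", "v", "p", "temp", "dummy_output_scaled"],
     ["u_scaled", "v_scaled", "p_scaled", "temp_scaled", "dummy"]),  # Scaler
]

# Split the Scaler: one node without dummy_output_scaled, plus a
# separate node just for dummy_output_scaled -> dummy. No more cycle.
acyclic = [
    (["dummy_input_scaled"], ["dummy_output_scaled"]),
    (["temp_scaled"], ["dummy_input_scaled"]),
    (["u", "v", "p", "temp"],
     ["u_scaled", "v_scaled", "p_scaled", "temp_scaled"]),
    (["dummy_output_scaled"], ["dummy"]),
]

print(find_cycle(cyclic))   # True
print(find_cycle(acyclic))  # False
```

Splitting the Scaler into two nodes, as suggested above, removes the back edge from temp_scaled to the dummy network, which is exactly what lets the unroller find an evaluation order.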

Hi @ngeneva,

Thank you for your reply.

I do not think it is a cyclic graph.
My example works as follows:

  • My dummy network needs dummy_input_scaled
  • My SymPy node needs temp_scaled to get dummy_input_scaled
  • My Scaler node needs temp (not dummy_output_scaled) to get temp_scaled, which is the output of line 80.

Hi @user106225

The problem is that when you create the Scaler node, it treats all of the listed inputs (['u', 'v', 'p', 'temp', 'dummy_output_scaled']) as inputs to a single node of the symbolic graph. It does not track inputs one by one; it creates one node.

So it is a cyclic graph because dummy_output_scaled is a required input to the same node that computes temp_scaled. Did you try my solution of removing dummy_output_scaled and running the code?
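A small plain-Python illustration of this "one node" bookkeeping (this just models the dependency tracking, not the actual Modulus internals; variable names follow the thread):

```python
# Each output of a node depends on the node's FULL input list; the
# symbolic graph does not record which input feeds which output.
node_inputs = ["u", "v", "p", "temp", "dummy_output_scaled"]
node_outputs = ["u_scaled", "v_scaled", "p_scaled", "temp_scaled", "dummy"]

deps = {out: set(node_inputs) for out in node_outputs}

# temp_scaled therefore requires dummy_output_scaled, even though
# mathematically it only needs temp -- this is the edge that closes
# the cycle through the dummy network.
print("dummy_output_scaled" in deps["temp_scaled"])  # True
```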

Now I understand why this is a cyclic graph.
I tried your solution and was able to run the code without errors.

I appreciate your kind support.
