Hi,
I'm using Modulus for a problem similar to the Conjugate Heat Transfer
example.
I want to include the temporal variable, so I use the equations with time=True.
I decided to follow the Moving Time Window: Taylor Green Vortex Decay
example and to manage the time-dependent training with SequentialSolver().
The overall time window of my problem is very large (about 8 hours).
My questions are:
is the approach with SequentialSolver right for this amount of time? There is also the 1D Wave Equation
example, so I had some doubts about which method to use;
if I want to cover the overall time window of my problem, how can I do it?
Creating several time window nets is too expensive and the training can be really slow. Could the configuration below be a choice?
# set time_window_size = 1 hour
time_window_size = 3600.0
t_symbol = Symbol("t")
time_range = {t_symbol: (0, time_window_size)}
# set 8 time windows so I cover 8 hours
nr_time_windows = 8
And maybe go even further: decrease nr_time_windows and increase time_window_size?
What is the semantic meaning of the initial-condition inferencers calculated in the Taylor Green example? I'm referring to this:
for i, specific_time in enumerate(np.linspace(0, time_window_size, 10)):
    vtk_obj = VTKUniformGrid(...)
    grid_inference = PointVTKInferencer(...)
    ic_domain.add_inferencer(grid_inference, name="time_slice_" + str(i).zfill(4))  # <--- this line
    window_domain.add_inferencer(
        grid_inference, name="time_slice_" + str(i).zfill(4)
    )
As a preface, transient problems are very hard for this type of physics-informed ML without data. If the dynamics are complex, you are likely entering current research territory that pushes the limits of what is possible with purely physics-constrained learning. So take everything I suggest with a grain of salt.
is the approach with SequentialSolver right for this amount of time? There is also the 1D Wave Equation example
I like to think it's not a matter of the length of time but of the complexity of the dynamics. If the system is fairly smooth/diffusive, then the continuous (1D wave) approach can work. But for really complex dynamics, we found that this moving time window can provide a solution by breaking the problem into smaller chunks. For example, in Taylor-Green the length scales of the dynamics are dramatically different between start and end, which, in our experience, is near impossible for a single network to learn in one go.
if I want to cover the overall time window of my problem, how can I do it? Creating several time window nets is too expensive and the training can be really slow. Could the configuration below be a choice?
If you're only using one time window for the entire t range of the problem, then this essentially becomes similar to the 1D wave approach. If you have a set of smaller time windows, then you should also be able to train for a smaller number of iterations per window, since each domain is smaller.
Think of this like a numerical solver with a residual criterion: for large timesteps you need more iterations to converge to a solution, while for smaller timesteps you may only require a few (granted, the total number of iterations across the entire simulation is larger for smaller timesteps).
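To make the window arithmetic concrete, here is a small plain-Python sketch (not Modulus API; the numbers are the hypothetical ones from your question) of how nr_time_windows and time_window_size relate to the covered time span, and how a window-local time maps to absolute simulation time:

```python
# Hypothetical configuration from the question: 8 windows of 1 hour each.
time_window_size = 3600.0  # seconds per window
nr_time_windows = 8

# Total physical time covered by the sequential training.
total_time = time_window_size * nr_time_windows  # 28800.0 s = 8 h

def absolute_time(window_index: int, t_local: float) -> float:
    """Map a window-local time t_local in [0, time_window_size]
    to absolute simulation time."""
    assert 0.0 <= t_local <= time_window_size
    return window_index * time_window_size + t_local

# Halfway through the third window (index 2):
print(total_time)                # 28800.0
print(absolute_time(2, 1800.0))  # 9000.0
```

Whether you use 8 windows of 1 hour or fewer, larger windows, the covered span is the same; what changes is how much dynamics each network has to capture per window.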
What is the semantic meaning of the initial-condition inferencers calculated in the Taylor Green example? I'm referring to this:
The initial condition in this problem is a known analytical field and is placed into its own domain for training (the sequential solver trains over a set of domains). Before we can train a network to learn the dynamics, we need it to reproduce the initial state first. That code adds an inferencer to the initial-condition domain to sample the prediction at t=0. Your solution over time will only be as good as your learned initial state.
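For reference, the loop you quoted just samples the window at 10 evenly spaced times and registers one inferencer per slice. A minimal plain-Python sketch of what it produces (replicating np.linspace without the Modulus VTK objects, using the hypothetical time_window_size from your question):

```python
time_window_size = 3600.0  # hypothetical value from the question

# 10 evenly spaced sample times in [0, time_window_size],
# equivalent to np.linspace(0, time_window_size, 10).
n = 10
sample_times = [time_window_size * k / (n - 1) for k in range(n)]

# One inferencer name per slice, formatted like the Taylor-Green example.
names = ["time_slice_" + str(i).zfill(4) for i in range(n)]

print(names[0], names[-1])                # time_slice_0000 time_slice_0009
print(sample_times[0], sample_times[-1])  # 0.0 3600.0
```

Each name becomes one VTK output file, so you get a series of snapshots of the predicted field across the window to inspect visually.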