What is the use case for Modulus?

Thank you for releasing Modulus/SimNet - it is a very interesting piece of tech! I'm having a bit of trouble understanding what the actual use case for it is:
For a fixed problem/geometry, training takes much longer than obtaining a solution from a standard FEM solver or similar (on my RTX 3080 the Helmholtz problem takes a few minutes to train, waveguide2D takes about 10 h; both can be solved much faster using Elmer).
I understand that once trained we have a network that we can query for a solution at any input, which is not possible with a standard FEM solver without remeshing and re-solving, but how important is this?
If we do parametrize the input geometry (like with the fpga example), then we can indeed try to optimize in that parametrization space, but training a network with this extra parametrization is even slower.

So is the general idea that we parametrize our problem, then train the solution on a GPU cluster for days or weeks, but once we have the solution we can use it for fast optimization, or anything else that requires a quick forward pass?
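To make the workflow I am asking about concrete, here is a toy sketch of the "train once, optimize fast" idea. The tiny fixed-weight MLP below is purely a hypothetical stand-in for a trained parameterized surrogate (it is not the Modulus API, and the weights are random rather than trained); the point is only that each candidate design costs one cheap forward pass:

```python
import numpy as np

# Hypothetical stand-in for a trained parameterized surrogate u(x; p).
# In practice this would be the network trained by Modulus; here a tiny
# fixed-weight MLP just illustrates the cost profile of the forward pass.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 32)); b1 = rng.normal(size=32)
W2 = rng.normal(size=(32, 1)); b2 = rng.normal(size=1)

def surrogate(x, p):
    """Forward pass: field values at points x for design parameter p."""
    inp = np.stack([x, np.full_like(x, p)], axis=-1)  # shape (N, 2)
    h = np.tanh(inp @ W1 + b1)
    return (h @ W2 + b2).squeeze(-1)

# Design optimization: sweep the parameter space with cheap forward passes
# instead of re-meshing and re-solving for every candidate design.
x_eval = np.linspace(0.0, 1.0, 200)      # evaluation points in the domain
params = np.linspace(0.1, 2.0, 100)      # candidate design parameters
objective = [np.mean(surrogate(x_eval, p) ** 2) for p in params]
best_p = params[int(np.argmin(objective))]
print(f"best parameter: {best_p:.3f}")
```

The expensive part (training) is paid once up front; the sweep itself is just matrix multiplications, which is where the reported speedups over re-running a FEM solver per design would come from.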

In the SimNet paper ([2012.07938] NVIDIA SimNet^{TM}: an AI-accelerated multi-physics simulation framework), when you report in Table 3 the time needed for optimization of the fpga, SimNet is 45,000x faster than a FEM solver, and that figure does not include the training time, right? What was the training time for it (on the 8xV100 setup you used)?

Thank you for any info regarding my understanding of the use cases and the training times.



You may look at the papers from the CRUNCH group.


In their papers, they display various use cases for physics-informed neural networks (PINNs).

An example: in one of these papers, PINNs allowed for Runge-Kutta simulations of extremely high order.

From my experience, you are right: PINNs shine in design optimization, when you try different model configurations (subject to physical constraints) based on a multitude of parameters.

From my point of view, the general idea of PINNs for design optimization is not too dissimilar to this recent NVIDIA paper concerning auto-LOD: Appearance-Driven Automatic 3D Model Simplification


I am also interested in a response to this question. Your sentiments match the general feeling I have gotten while working with PINNs for various CFD applications. The compute/GPU time for PINN applications seems like a major barrier to entry compared to existing simulation techniques like FEM/FD/FV.

Current AI methods will be slower than traditional solvers for training a single case/geometry. The advantages of Physics-ML technology are (these are also covered with examples in the Modulus presentation):

  1. Multiple (parameterized) cases where there are several different configurations in the analysis space
  2. Inverse problems (data is given, but the coefficients of the PDEs must be determined)
  3. Cases where measured data is available but the physics is too complicated or not fully understood
  4. Data assimilation cases where there is measured or simulated data for some variables but not the entire field (e.g. digital twins, medical imaging, full waveform inversion, etc.)
  5. Training an AI model on the compute-intensive part of the solver and calling it from the solver during the analysis to improve speed (constitutive models, turbulence models, radiation view factors, etc.)
  6. Point clouds can sometimes yield higher-quality solutions than mesh-based simulations (since, without convergence studies, mesh-based solutions can yield questionable results)
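As a minimal illustration of item 2 (inverse problems), here is a deliberately simple least-squares sketch rather than a PINN: given synthetic "measured" data for u(x) known to satisfy u'' + c*u = 0, recover the unknown coefficient c from the data. A PINN framework would do this by making c a trainable variable in the physics loss; this toy version uses finite differences only to show what "data given, coefficients determined" means:

```python
import numpy as np

# Synthetic "measured" data: u(x) = sin(2x) satisfies u'' + c*u = 0 with c = 4.
x = np.linspace(0.0, np.pi, 401)
u = np.sin(2 * x)

# Finite-difference second derivative on the interior points.
h = x[1] - x[0]
u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2

# Least-squares fit of the unknown coefficient: minimize ||u_xx + c*u||^2,
# which has the closed-form solution below.
ui = u[1:-1]
c_est = -(u_xx @ ui) / (ui @ ui)
print(f"recovered coefficient: {c_est:.4f}")  # close to the true value 4
```

In a PINN the same residual u'' + c*u would appear in the loss function, with c updated by gradient descent alongside the network weights, which is what lets these methods handle cases where the data is scattered or the operator is nonlinear.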

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.