How to Run AI-Powered CAE Simulations

Originally published at: How to Run AI-Powered CAE Simulations | NVIDIA Technical Blog

In modern engineering, the pace of innovation is closely linked to the ability to perform accelerated simulations. Computer-aided engineering (CAE) plays a vital role in the design of optimal and reliable engineering products by helping verify performance and safety. Traditional numerical simulations produce accurate results but often require hours, days, or even weeks to run.…


Dear Sirs,

would you mind letting me know whether the procedure described in the article is ready-to-use for a “traditional” CAE analyst?

Can a surrogate model replace just a portion of a CAE workflow? If so, how can the other applications in the workflow consume the data produced by the surrogate model?

Thank you very much

Thanks for your question, Riccardo!

The workflow is designed to be flexible and modular, and how ready-to-use it is depends on which part you want to use. If you’re just running the inference on the trained surrogate model we provide in the NIM on a dataset like DrivAerML, only very basic Python knowledge is needed. If you want to train or fine-tune the model, some ML training experience is required—but the DLI we provide guides you through every step.

You can use all the components together to create an end-to-end workflow, or use each module independently. The surrogate model outputs data as a NumPy array, which can be converted to any format you need and fed into other applications. We show an example with Kit-CAE, but any downstream tool could be used. Outputs are compatible with standard formats like VTU or VTP, making integration straightforward.
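To make that hand-off concrete, here is a hedged, stdlib-plus-NumPy sketch of wrapping a surrogate model's per-point output as an ASCII .vtp point cloud that downstream tools can open. The function name, the "pressure" field, and the output filename are illustrative assumptions, not the NIM's actual output schema; a real pipeline would more likely use PyVista or the VTK Python bindings.

```python
# Hypothetical sketch: serialize a surrogate model's NumPy output
# as a VTK PolyData (.vtp) point cloud. Names and shapes here are
# assumptions for illustration, not the Blueprint's actual schema.
import numpy as np

def write_point_cloud_vtp(path, points, fields):
    """Write points (N, 3) plus named per-point arrays to an ASCII .vtp file."""
    n = len(points)

    def da(name, arr, comps):
        # One ASCII <DataArray> element; empty name omits the attribute.
        flat = " ".join(f"{v:.6g}" for v in np.asarray(arr, dtype=float).ravel())
        name_attr = f' Name="{name}"' if name else ""
        return (f'<DataArray type="Float32"{name_attr} '
                f'NumberOfComponents="{comps}" format="ascii">{flat}</DataArray>')

    point_data = "".join(
        da(k, v, v.shape[1] if v.ndim == 2 else 1) for k, v in fields.items()
    )
    # One vertex cell per point so viewers render the cloud.
    connectivity = " ".join(str(i) for i in range(n))
    offsets = " ".join(str(i + 1) for i in range(n))
    xml = (
        '<?xml version="1.0"?>'
        '<VTKFile type="PolyData" version="0.1" byte_order="LittleEndian">'
        f'<PolyData><Piece NumberOfPoints="{n}" NumberOfVerts="{n}">'
        f'<Points>{da("", points, 3)}</Points>'
        f'<PointData>{point_data}</PointData>'
        '<Verts>'
        f'<DataArray type="Int64" Name="connectivity" format="ascii">{connectivity}</DataArray>'
        f'<DataArray type="Int64" Name="offsets" format="ascii">{offsets}</DataArray>'
        '</Verts>'
        '</Piece></PolyData></VTKFile>'
    )
    with open(path, "w") as f:
        f.write(xml)

# Example: four surface points with a scalar field from the surrogate.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
write_point_cloud_vtp("surrogate_out.vtp", pts,
                      {"pressure": np.array([1.0, 2.0, 3.0, 4.0])})
```

The resulting file opens in ParaView or any VTK-based tool, which is one way the surrogate output could feed the rest of a workflow.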

I’ve tried Kit-CAE with the Blueprint models and sample CAE files. Getting familiar with the workflow and prepping the data as required should kick-start things.
[image is only for reference]

Is this course instructor-led or online?

It is a comprehensive, 8-hour self-paced online course that takes you step by step through the entire workflow.


Thank you for the details 👍, can’t wait to get hands-on when it’s available in my region.

Dear Sirs,

I’m building a PhysicsNeMo surrogate model for the structural (static) analysis of a cantilever beam.

For each training sample I currently have:

  1. an STL file of the undeformed geometry (surface only, no analysis mesh), and

  2. a VTP file that stores the node IDs, their XYZ coordinates, and the corresponding displacements (ux, uy, uz).
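Since the undeformed STL carries surface geometry only, all supervised targets (the displacements) would have to come from the VTP side of each pair. As a small illustration of what the geometry input contains, here is a stdlib-only sketch that pulls vertex coordinates out of an ASCII STL; the file name and contents are made up for the example, and a binary STL would need a different reader.

```python
# Hypothetical sketch: extract unique vertex coordinates from an
# ASCII STL file (geometry only, no analysis mesh). The sample
# file written below is illustrative, not real training data.
import re

def read_ascii_stl_vertices(path):
    """Return unique (x, y, z) vertex tuples from an ASCII STL, in order."""
    verts, seen = [], set()
    with open(path) as f:
        for line in f:
            m = re.match(r"\s*vertex\s+(\S+)\s+(\S+)\s+(\S+)", line)
            if m:
                xyz = tuple(float(v) for v in m.groups())
                if xyz not in seen:  # STL repeats shared vertices per facet
                    seen.add(xyz)
                    verts.append(xyz)
    return verts

# Write a tiny one-facet STL and parse it back.
stl_text = """solid beam
facet normal 0 0 1
  outer loop
    vertex 0 0 0
    vertex 1 0 0
    vertex 0 1 0
  endloop
endfacet
endsolid beam
"""
with open("beam.stl", "w") as f:
    f.write(stl_text)

print(read_ascii_stl_vertices("beam.stl"))
# → [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```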

During inference I plan to provide only the undeformed STL.

Is this pair of files sufficient and appropriate as training data for PhysicsNeMo (DoMINO / X-MeshGraphNet)?
If not, which additional information or field arrays should I include to make the dataset complete?

Thank you in advance for your advice!

Hi @sho2125 @preethampai @riccardo.testi and all,

Since you’re exploring AI-powered CAE simulations, you might want to catch a livestream tomorrow where NVIDIA’s Distinguished Engineer Neil Ashton will walk through how accelerated computing, AI Physics, and industrial digital twins are transforming simulation and engineering workflows.

Watch live at 9 AM PDT:
https://www.linkedin.com/events/aneweraofindustrialengineeringw7380012101932797952/theater/

Also worth noting:
GTC DC — Oct 27–29: https://www.nvidia.com/gtc/dc/
GTC San Jose — Mar 16–19: https://www.nvidia.com/gtc/

Cheers,
Edmar

P.S. Join the NVIDIA Developer Discord to connect with others working on similar projects:


@emendizabal, looking forward to attending this session. Thanks for your note.
