Hello, I’m wondering if models trained with Modulus can be exported to ONNX format and, if so, whether they can then be used outside of Modulus in an external C++ application?
The reason I’m asking: I’d like to integrate models trained with Modulus into Unreal Engine 5 via its NeuralNetworkInference (NNI) plugin, which can run inference on ONNX models. Example UE5 project showcasing this: GitHub - cassiebreviu/unreal-onnxruntime-nni-example
I guess in theory it should not be hard to export an ONNX model, but will it be “standalone”? That is, can it be used for inference from other applications (like UE5) outside the Modulus context of boundary constraints, interior constraints, and the network’s knowledge of the boundaries we’ve defined through the Tessellation module when working with STL geometries?
Hi @michaltakac
Thanks for your interest. Inference is a major effort we are currently working on in Modulus, and it includes much of what you’re asking. More specifically, as of Modulus version 22.09 there is no explicit inference export functionality built into Modulus (although this is changing very soon with ONNX/TRT functions). However, since Modulus is built on PyTorch, you can set up a manual export script yourself for most of the models (models like FNO/PINO/AFNO will not work).
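As a rough illustration, a manual export could look like the sketch below. It assumes the trained architecture is a plain point-wise MLP (typical of Modulus PINN setups) and that you have already pulled the network’s weights out of your training checkpoint; the `MLP` class, layer sizes, and the `flow_network.pth` path are all hypothetical placeholders, not Modulus API:

```python
# Hedged sketch: manually exporting a Modulus-trained network to ONNX.
# The MLP below is a stand-in for the fully connected network Modulus
# trained; the checkpoint path "flow_network.pth" is hypothetical.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Stand-in network mapping (x, y, z) coordinates to (u, v, w, p) fields."""
    def __init__(self, in_dim=3, out_dim=4, hidden=512, layers=6):
        super().__init__()
        dims = [in_dim] + [hidden] * layers
        blocks = []
        for a, b in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(a, b), nn.SiLU()]
        blocks.append(nn.Linear(hidden, out_dim))
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):
        return self.net(coords)

model = MLP()
# In practice you would extract the network's state_dict from the
# Modulus training checkpoint yourself.
model.load_state_dict(torch.load("flow_network.pth", map_location="cpu"))
model.eval()

# Export with a dynamic batch axis so the consumer (e.g. UE5) can query
# an arbitrary number of points per inference call.
dummy = torch.randn(1, 3)
torch.onnx.export(
    model,
    dummy,
    "flow_network.onnx",
    input_names=["coords"],
    output_names=["fields"],
    dynamic_axes={"coords": {0: "batch"}, "fields": {0: "batch"}},
    opset_version=13,
)
```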
The standalone aspect is a more challenging problem which we have on our radar, but it will take some more time. Essentially, we are looking at the possibility of bundling these data/inference pipelines into an easy-to-deploy package. In the meantime, users will need to set up their own PyTorch inference script.
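For example, once exported, one quick way to confirm the model really is standalone is to run it through ONNX Runtime directly, with no Modulus (or even PyTorch) dependency; the same session setup carries over to ONNX Runtime’s C++ API, which is what UE5’s NNI plugin builds on. This is a minimal sketch whose file and tensor names follow the export sketch above:

```python
# Hedged sketch: standalone inference check with ONNX Runtime.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("flow_network.onnx", providers=["CPUExecutionProvider"])

# Query the fields at a batch of sample points. Note the exported model has
# no notion of the STL geometry or constraints: it is up to the caller to
# only query points where the solution is valid.
coords = np.random.uniform(-1.0, 1.0, size=(1024, 3)).astype(np.float32)
(fields,) = sess.run(["fields"], {"coords": coords})
print(fields.shape)  # (1024, 4) -> u, v, w, p at each point
```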
Much of this is being actively developed, and the first set of inference export features should arrive in the next release. So please let us know of any specific inference features you may be interested in for your application!