Deploying/using a trained model | Post-Training

I’m currently trying to familiarize myself with the Modulus platform (v22.09) and am a little confused as to how I’m supposed to use the final models. Let’s say I’ve got a good model that I trained (foo.pth), and I now want to use it for generating predictions. The suggested Modulus workflow documentation ends at the “training begins” step:

https://docs.nvidia.com/deeplearning/modulus/modulus-v2209/user_guide/basics/modulus_overview.html

How can I run the finalized model? For example, is it the same as loading a saved PyTorch model (i.e. predictions are not implemented via the Modulus package)? If so, how do I know how to structure the inputs, given that the model definition was handled by Modulus? I couldn’t find an example that demonstrates how to use the output models, e.g. via a model.predict([inputs]) function.

Thank you.


Hi @npstrike

There are a few threads on the forums with some discussion of this. The summary is: it’s up to the user. We have some built-in methods that allow users to perform inference inside the symbolic ecosystem (e.g. if you need gradient calculations). We have also had users who just take the model, load the architecture manually, and then treat it like a PyTorch model, as in the sketch below.
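For anyone landing here later, a minimal sketch of that manual approach (not from the official docs). The architecture type and the key names ("x", "y", "u") are assumptions and must match whatever was used in your training config; the checkpoint is assumed to hold a plain state_dict, which is how Modulus writes its network checkpoints:

```python
import torch
from modulus.key import Key
from modulus.models.fully_connected import FullyConnectedArch

# Rebuild the architecture exactly as it was defined for training.
# Hypothetical keys -- replace with the input/output keys from your config.
model = FullyConnectedArch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u")],
)

# Load the trained weights (assumes foo.pth holds a plain state_dict)
model.load_state_dict(torch.load("foo.pth"))
model.eval()

# Modulus architectures consume and produce dictionaries of [N, 1]
# tensors keyed by variable name, not a single stacked tensor
inputs = {
    "x": torch.linspace(0.0, 1.0, 100).reshape(-1, 1),
    "y": torch.zeros(100, 1),
}
with torch.no_grad():
    outputs = model(inputs)  # e.g. {"u": tensor of shape [100, 1]}
```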

There is some nuance to manually running a Modulus-Symbolic network like a PyTorch model (the _impl property can be useful here). The reason for this additional step is that Modulus architectures are set up to work with dictionaries for the symbolic graph that gets built during training.
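To illustrate the dictionary point and the _impl note, continuing the sketch above (same assumed model and keys): _impl is the underlying torch module that operates on a single concatenated tensor rather than a dictionary. The column-ordering assumption below (inputs concatenated in input_keys order, outputs in output_keys order) is how I read the source, so verify it against your version:

```python
# Dictionary interface (the "normal" Modulus path)
out = model({"x": inputs["x"], "y": inputs["y"]})["u"]

# _impl bypasses the dictionaries: it takes the inputs concatenated
# into one [N, num_inputs] tensor and returns the outputs concatenated
# the same way. Assumed column order: input_keys / output_keys order.
flat_in = torch.cat([inputs["x"], inputs["y"]], dim=-1)  # [N, 2]
with torch.no_grad():
    flat_out = model._impl(flat_in)  # [N, 1] here, single column = "u"
```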

Thank you for the follow-up. I think it would be beneficial for the samples to include running solver.eval() as a final step. I did eventually find it by looking through the source code, but it wasn’t clear to me whether it’s something I’m intended to be using.
I think I have what I need in order to move forward with my projects though, thanks :)
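For completeness, a hedged sketch of what the solver.eval() route can look like, following the structure of the Modulus example scripts. The config path/name and the Domain contents are placeholders; the domain needs the same inferencers/validators you want evaluated:

```python
import modulus
from modulus.hydra import ModulusConfig
from modulus.domain import Domain
from modulus.solver import Solver

@modulus.main(config_path="conf", config_name="config")
def run(cfg: ModulusConfig) -> None:
    domain = Domain()
    # ... add the same inferencers/validators used during training ...
    slv = Solver(cfg, domain)
    slv.eval()  # loads the latest checkpoint and runs the domain's
                # inferencers/validators instead of training

if __name__ == "__main__":
    run()
```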