Modulus prediction based on the trained model

How can I make predictions once the model is trained?
Let's say I've trained the LDC case from the tutorial and would like to make predictions for various lid velocities. This wasn't explained in the tutorial. Any clue?


Hi @smraniaki

Please see the following post, which has some information:

The easiest way is to set up a slimmed-down version of your training script with just an inferencer in it and a dataset of the cases you want to test on (then use solver.eval()).
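For reference, a minimal (untested) sketch of such an inference-only script for the LDC example could look like the one below. Import paths, the instantiate_arch arguments and the PointwiseInferencer signature differ between Modulus releases, and the grid values are purely illustrative; if the lid velocity was parameterized during training, it would simply appear as an additional input key in invar.

```python
# Sketch of an inference-only script, assuming a setup similar to the LDC tutorial.
# Adjust import paths and config names to match your Modulus version.
import numpy as np
import modulus
from modulus.hydra import instantiate_arch, ModulusConfig
from modulus.key import Key
from modulus.solver import Solver
from modulus.domain import Domain
from modulus.domain.inferencer import PointwiseInferencer


@modulus.main(config_path="conf", config_name="config")
def run(cfg: ModulusConfig) -> None:
    # rebuild the same network/node as in training so the checkpoint can be restored
    flow_net = instantiate_arch(
        input_keys=[Key("x"), Key("y")],
        output_keys=[Key("u"), Key("v"), Key("p")],
        cfg=cfg.arch.fully_connected,
    )
    nodes = [flow_net.make_node(name="flow_network")]

    # query points for inference (illustrative grid over the cavity)
    x, y = np.meshgrid(np.linspace(-0.05, 0.05, 128), np.linspace(-0.05, 0.05, 128))
    invar = {"x": x.reshape(-1, 1), "y": y.reshape(-1, 1)}

    domain = Domain()
    domain.add_inferencer(
        PointwiseInferencer(invar=invar, output_names=["u", "v", "p"], nodes=nodes),
        "inference",
    )

    slv = Solver(cfg, domain)
    slv.eval()  # loads the latest checkpoint and runs only the inferencers


if __name__ == "__main__":
    run()
```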

The ov.py in the inferencers module has some more direct inference methods that avoid the use of the Modulus trainer, which could be useful if you prefer a more traditional PyTorch inference script.


Hello. I actually have the same question. All links that were posted to GitLab no longer appear to be valid, and I cannot find the Modulus project on GitLab. Do you have any updated links?

Thank you for your help.

@tstone

Did you register for access to the Modulus GitLab project? If so, and you have access to https://gitlab.com/nvidia/modulus/modulus, these links should work.

I have a GitLab account, but I do not see the option to register for access to the Modulus GitLab project. Do you have the registration link? When I click the URL you provided, I get a 404 "page not found" error.

Please see the Modulus download page on DevZone and register your GitLab account there. Then you should gain access; please use the following thread if you still have issues:

That’s perfect. I typically train a model and load it into another notebook to perform inference. I saw the load_network function in trainer.py. Is it possible to pass the configuration file into the load_network function so that I can load the model? If not, how is this normally done in Modulus?

Yep, that's correct.

The load_model function does the loading of the checkpoint. This method composes the checkpoint file path as i_dir + "/" + model.checkpoint_filename, i.e. the output folder plus the checkpoint file name. The information provided to this function and to others in the trainer class is based on your parsed config; in other words, you need to go through the trainer class if you want loading to be automated from your YAML.

The checkpoint file path is then passed to the model.load function inside the Arch class (basically a torch load at this point). So if you want to do things manually, call your model's .load() function and give it the path of the PyTorch checkpoint.
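A rough sketch of that manual notebook workflow is below, assuming a FullyConnectedArch was used during training. The checkpoint path and file name here are made up, and the import path and the exact argument .load() expects vary between Modulus versions.

```python
# Manual loading sketch for a notebook; paths and file names are hypothetical.
import torch
from modulus.key import Key
from modulus.models.fully_connected import FullyConnectedArch

# must match the architecture and input/output keys used during training
flow_net = FullyConnectedArch(
    input_keys=[Key("x"), Key("y")],
    output_keys=[Key("u"), Key("v"), Key("p")],
)

# .load() as described above; depending on the version it may expect the
# checkpoint file itself or the directory containing it
flow_net.load("outputs/ldc_2d/flow_network.pth")

# since the Arch is a torch.nn.Module, a plain state-dict load should also
# work if the checkpoint stores a state dict:
# flow_net.load_state_dict(torch.load("outputs/ldc_2d/flow_network.pth"))

flow_net.eval()
with torch.no_grad():
    # Arch models take and return dicts of tensors keyed by variable name
    invar = {
        "x": torch.linspace(-0.05, 0.05, 100).reshape(-1, 1),
        "y": torch.zeros(100, 1),
    }
    outvar = flow_net(invar)
print(outvar["u"].shape)
```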

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.