GPU Compatibility with NVIDIA Modulus

I am getting a runtime error saying CUDA is out of memory. How can I verify that my GPU is compatible with this software?


Hi @isabella.hillman

Thanks for your interest in Modulus. If Modulus (and, more importantly, PyTorch) runs at all, then your GPU is theoretically compatible. However, Modulus requires a decent amount of VRAM, as all deep learning does. Running `nvidia-smi` will show more information about the card installed, including its total memory.
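As a rough sketch, you can also query the GPU directly from Python via PyTorch. This assumes PyTorch is installed, and the 8 GiB threshold below is an illustrative guess for this kind of workload, not an official Modulus requirement:

```python
# Sketch: check whether PyTorch can see a CUDA GPU and how much VRAM it has.
# The 8 GiB threshold is an assumption for illustration, not a documented minimum.

def enough_vram(total_bytes: int, needed_gib: float = 8.0) -> bool:
    """Return True if the reported VRAM meets the assumed minimum."""
    return total_bytes / 1024**3 >= needed_gib

try:
    import torch

    if torch.cuda.is_available():
        # Properties of the first visible CUDA device
        props = torch.cuda.get_device_properties(0)
        print(props.name, "sufficient VRAM:", enough_vram(props.total_memory))
    else:
        print("No CUDA device visible to PyTorch")
except ImportError:
    print("PyTorch is not installed")
```

If this prints that no CUDA device is visible, Modulus will not be able to use the GPU regardless of which card is physically installed.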

If this is running on a laptop, I would suggest running Modulus remotely on a cluster, or even in a Colab notebook. Our install notes list the GPUs we have tested on, which can help you judge whether your hardware is likely to be sufficient.

Thank you @ngeneva for your response, which cleared up some of my questions.
I am still unsure whether the graphics device listed in the above image as "NVIDIA Corporation / Mesa Intel Graphics (ADL GT2)" falls under any of the categories in the recommended-hardware portion of the installation documentation.
Also, a commonly suggested fix I found is to decrease the batch size. What would be the easiest way to make this change in the example files? Can I change them directly in GitLab?

Hi @isabella.hillman

You will want to edit the config file for the example of interest on the machine you're running on. It's a YAML file, so any text editor or IDE will do the trick (editing it in GitLab would change the repository copy, not the files on your machine). Once changed, just re-run the example as you normally would and it will pick up the updated settings.
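To make this concrete, a typical example config contains a batch-size section you can shrink to reduce VRAM use. The key names and values below are illustrative only; the actual layout differs between examples, so look for the batch-size entries in the specific example's config:

```yaml
# conf/config.yaml (illustrative; key names vary per example)
batch_size:
  interior: 2000   # reduced from a larger value, e.g. 4000; smaller batches use less VRAM
  boundary: 500
```

Halving the batch sizes roughly halves the activation memory per step, at the cost of noisier gradients and more steps per epoch, so reduce gradually until the out-of-memory error goes away.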