Hi,
I’m new to Nvidia Modulus and I have some questions:
- Does Modulus automatically run on GPU?
- Can Modulus run on CPU? How?
- What are the limitations for choosing a GPU for Modulus? Only the amount of VRAM?
Hi @elin.hm20
Thanks for your interest in Modulus:
Does Modulus automatically run on GPU?
If PyTorch sees GPUs, Modulus will use them.
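For reference, you can verify what PyTorch sees with a quick check like this (plain PyTorch, no Modulus-specific API involved):

```python
# Quick sanity check that PyTorch can see your GPUs;
# if this reports a GPU, Modulus will pick it up as well.
import torch

print(torch.cuda.is_available())          # True if a CUDA-capable GPU is visible
print(torch.cuda.device_count())          # number of visible GPUs
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "NVIDIA A100-SXM4-40GB"
```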
Can Modulus run on CPU? How?
Some parts will work on CPU. If there's no GPU present, packages like Modulus Sym should default to CPU. In Modulus-Launch, you may need to modify the example script.
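If you want to try this, one generic way to make a machine look GPU-less to PyTorch (and therefore to Modulus) is to hide the GPUs via CUDA_VISIBLE_DEVICES. This is a standard PyTorch/CUDA mechanism rather than a Modulus-specific switch, and whether a given example then runs end to end on CPU depends on the example:

```python
# Hide all GPUs from CUDA before torch/Modulus initialize CUDA,
# so anything that checks for GPUs falls back to CPU.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # must be set before CUDA is initialized

import torch
print(torch.cuda.is_available())  # False -> GPU checks fail and CPU is used
```

You can also set the same variable in the shell before launching the training script, which avoids touching the code at all.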
What are the limitations for choosing a GPU for Modulus? Only the amount of VRAM?
Typically VRAM is the critical spec; more is better for all deep learning. But the speed / FLOPS of the GPU also matters. Newer NVIDIA GPUs will naturally run faster and may have improved or different features (e.g. tensor cores). That said, many problems will still work just fine and converge in a timely manner on a wide range of GPU models. For development we use A100 and V100 GPUs.
In general, if a GPU is good for deep learning / PyTorch, it will be good for Modulus.
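If it helps, the specs mentioned above (VRAM and compute capability, which determines tensor core support on 7.0 and newer) can be queried directly through PyTorch:

```python
# Inspect the GPU specs PyTorch reports for the first visible device.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Name:               {props.name}")
    print(f"Total VRAM:         {props.total_memory / 1024**3:.1f} GiB")
    print(f"Compute capability: {props.major}.{props.minor}")
```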
Thanks for your quick and complete response.
On the website there are some recommendations for GPUs, such as the A100, A30, A4000, V100, and RTX 30xx.
What about A6000 model? Can this model support Modulus?
Hi @elin.hm20
The listed GPUs are ones we have officially tested ourselves and verified the examples work on. Multiple other users have been successful on GPUs outside this list, sometimes with a few modifications.
Thank you very much for the help.