What is the recommended hardware/GPU for Modulus?

Hi,
I’m new to Nvidia Modulus and I have some questions:

  1. Does Modulus automatically run on GPU?
  2. Can Modulus run on CPU? How?
  3. What are the limitations for choosing a GPU for Modulus? Only the amount of VRAM?

Hi @elin.hm20

Thanks for your interest in Modulus:

Does Modulus automatically run on GPU?

If PyTorch sees GPUs, Modulus will use them.
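
(For reference, a quick way to confirm what PyTorch, and therefore Modulus, will see is the standard PyTorch query below. This is plain PyTorch, not a Modulus-specific API.)

```python
import torch

# Standard PyTorch check: if this reports a GPU, Modulus will use it by default.
if torch.cuda.is_available():
    print(f"GPU detected: {torch.cuda.get_device_name(0)}")
else:
    print("No GPU detected; PyTorch (and Modulus, where supported) would fall back to CPU")
```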

Can Modulus run on CPU? How?

Some parts will work on CPU. If there's no GPU present, packages like Modulus Sym should default to CPU. In Modulus-Launch you may need to modify the example script; a sketch of the typical change is below.
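
(As an illustration of the kind of change meant here, this is the generic PyTorch device-selection pattern; the model and tensors are placeholders, not code from an actual Modulus-Launch example.)

```python
import torch

# Pick the GPU when one is visible, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model/inputs; in an example script you would move the real
# model and batches to `device` instead of hard-coding "cuda".
model = torch.nn.Linear(16, 16).to(device)
x = torch.randn(4, 16, device=device)
y = model(x)
print(f"Forward pass ran on: {device}")
```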

What are the limitations for choosing a GPU for Modulus? Only the amount of VRAM?

Typically VRAM is the critical spec; more is better for all deep learning. The speed / FLOPS of the GPU also matters. Newer NVIDIA GPUs will naturally run faster and may have improved or additional features (e.g. Tensor Cores). But many problems will still work just fine and converge in a timely manner on a wide range of GPU models. For development we use A100 and V100 GPUs.
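
(If you want to check those specs on your own card, PyTorch exposes them directly; a minimal sketch using the standard API:)

```python
import torch

# Report VRAM and compute capability of the first visible GPU.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Name:               {props.name}")
    print(f"Total VRAM:         {props.total_memory / 1024**3:.1f} GiB")
    print(f"Compute capability: {props.major}.{props.minor}")
```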

In general, if a GPU is good for deep learning / PyTorch, it will be good for Modulus.

Thanks for your quick and complete response.
On the website there are some recommendations for GPUs, such as the A100, A30, A4000, V100, and RTX 30xx.
What about the A6000? Can this model support Modulus?

Hi @elin.hm20

The listed GPUs are ones we have officially tested ourselves and verified the examples work on. Multiple other users have been successful on GPUs outside this list, sometimes with a few modifications.

Thank you very much for the help.

Hi @ngeneva, what do you mean by

If there's no GPU present, packages like Modulus Sym should default to CPU.

I'm unable to run modulus-sym on CPU; I'm getting this error:

RuntimeError: CUDA error: CUDA driver version is insufficient for CUDA runtime version
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
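
(A quick way to see whether this is a driver/runtime mismatch: this error usually means something still tried to initialize CUDA on a machine whose driver is missing or older than the CUDA runtime the PyTorch build expects. A minimal sketch using standard PyTorch attributes only:)

```python
import torch

# Compare the CUDA runtime this PyTorch build expects with what the driver
# reports (the driver version is shown in the header of `nvidia-smi`).
print("CUDA runtime PyTorch was built with:", torch.version.cuda)  # None on CPU-only builds
print("GPU usable by PyTorch:", torch.cuda.is_available())
```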