Does physical RAM need to match GPU VRAM capacity?

I have heard that system RAM should be at least 1.5 times the total GPU memory. For example, if I pair two RTX 4090 cards (24 GB each), my RAM should be at least 72 GB.

Is such a large memory capacity truly necessary? If I used less RAM, would it affect performance, particularly for machine learning workloads?

In particular, would there be any issues or performance impact if the RAM capacity falls below the GPU's VRAM during machine learning — for example, running a single RTX 4090 (24 GB) with only 8 GB of system RAM?
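For reference, the rule of thumb mentioned above is simple arithmetic; a minimal sketch (the 1.5× ratio is the heuristic from the question, not an official requirement):

```python
def recommended_ram_gb(vram_gb_per_gpu: float, num_gpus: int, ratio: float = 1.5) -> float:
    # Rule-of-thumb pairing heuristic: host RAM >= ratio * total GPU VRAM.
    return ratio * vram_gb_per_gpu * num_gpus

# Two RTX 4090s (24 GB each) under the 1.5x heuristic:
print(recommended_ram_gb(24, 2))  # -> 72.0 GB

# One RTX 4090 with only 8 GB of RAM falls well below this:
print(recommended_ram_gb(24, 1))  # -> 36.0 GB recommended vs. 8 GB installed
```

Whether the shortfall actually hurts depends on the workload: frameworks stage batches and checkpoints through host RAM (and pinned buffers for fast host-to-device copies), so too little RAM tends to show up as swapping and slow data loading rather than outright failure.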