This strikes me as impossible to do in a generic way. I have no idea what metric someone would use to determine how many CPUs you need, other than the general recommendation (though certainly not a requirement) that you have one CPU core per GPU.
The number of CPU cores and GPUs you want in one machine depends on what application you run on it. Host memory in a single machine is often limited: with too many GPUs plugged in, there may not be enough host memory for all of them to run at optimal speed. On the other hand, too few GPUs limits peak performance.
Yeah, that recommendation is seldom wrong for a system dedicated to CUDA, except in special situations. Some systems can get by with fewer cores, and if you are doing a lot of CPU work not directly dependent on CUDA, you might want more.