GPU cores vs CPU cores?

A co-worker mentioned that he has seen a “spreadsheet” (or an executable that lets you enter different parameters) for configuring a GPGPU system with an optimal number of CPUs and GPUs.

Both he and I have searched nvidia.com for it without any luck.
Any suggestions on where to find this?

This strikes me as impossible to do in a generic way. I have no idea what metric someone would use to determine how many CPUs you need, other than the general recommendation (though certainly not a requirement) that you have one CPU core per GPU.
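
The reason for that rule of thumb, as I understand it, is that each GPU is normally driven by its own host thread, which launches work and services the device. A minimal sketch of that pattern (pthreads plus the runtime API; the busywork kernel is just a placeholder and error checking is omitted):

[code]
// Sketch: one host (CPU) thread per GPU. The busywork kernel is a
// placeholder; error checking omitted. Since CUDA 4.0 a plain
// cudaSetDevice() in each thread is enough to bind it to a device.
#include <cuda_runtime.h>
#include <pthread.h>

__global__ void busywork(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

static void *worker(void *arg)
{
    int dev = *(int *)arg;
    cudaSetDevice(dev);                 // bind this CPU thread to one GPU

    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    busywork<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();            // this CPU thread services the GPU
    cudaFree(d_data);
    return NULL;
}

int main(void)
{
    int count;
    cudaGetDeviceCount(&count);
    if (count > 16) count = 16;

    pthread_t threads[16];
    int ids[16];
    for (int d = 0; d < count; ++d) {
        ids[d] = d;
        pthread_create(&threads[d], NULL, worker, &ids[d]);
    }
    for (int d = 0; d < count; ++d)
        pthread_join(threads[d], NULL);
    return 0;
}
[/code]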

At the risk of hijacking this thread: if you’re using MPI + CUDA, would you want nodes that are as heavy as possible, i.e. as many GPUs per node as you can fit? Any thoughts on this?
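
For reference, the kind of setup I have in mind gives each GPU its own MPI rank (a sketch only; it assumes ranks are packed contiguously onto nodes, so rank % deviceCount picks a distinct GPU for every rank on a node):

[code]
// Sketch: one MPI rank per GPU, device chosen by rank % deviceCount.
// Assumes ranks are placed contiguously per node; error checking omitted.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, ndev;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaGetDeviceCount(&ndev);

    // With N GPUs per node and ranks packed per node, rank % N
    // gives every rank on a node its own device.
    int dev = rank % ndev;
    cudaSetDevice(dev);

    printf("rank %d -> GPU %d of %d\n", rank, dev, ndev);

    MPI_Finalize();
    return 0;
}
[/code]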

lintek, linköping? :)

The number of CPU and GPU cores you want in one machine depends on the application you run on it. The amount of host memory in one machine is often limited: if you have too many GPUs plugged in, you may not have enough host memory for all of them to work at optimal speed. On the other hand, too few GPUs limit your peak performance.
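
As a rough sanity check (a sketch only; the actual host-memory requirement is entirely application dependent, e.g. how much pinned staging memory you keep per device), you can compare the combined device memory against the host RAM you plan to install:

[code]
// Sketch: report each GPU's memory so it can be weighed against host
// RAM. How much host memory is "enough" depends on the application
// (e.g. how much pinned staging memory you allocate per device).
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int count;
    cudaGetDeviceCount(&count);

    size_t total = 0;
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        printf("GPU %d: %s, %zu MB\n",
               d, prop.name, prop.totalGlobalMem >> 20);
        total += prop.totalGlobalMem;
    }
    printf("Combined device memory: %zu MB\n", total >> 20);
    return 0;
}
[/code]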

The only spreadsheet that I have come across is the Occupancy Calculator…
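
If you are on a newer toolkit, the same calculation is also exposed through the runtime API (cudaOccupancyMaxActiveBlocksPerMultiprocessor, added in CUDA 6.5 if I recall correctly), so you can query it from code instead of the spreadsheet. A sketch, with mykernel as a placeholder:

[code]
// Sketch: the runtime-API counterpart of the Occupancy Calculator
// (cudaOccupancyMaxActiveBlocksPerMultiprocessor, CUDA 6.5+).
// mykernel is a placeholder; error checking omitted.
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void mykernel(float *out)
{
    out[blockIdx.x * blockDim.x + threadIdx.x] = 0.0f;
}

int main(void)
{
    int blockSize = 256;
    int maxBlocksPerSM;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &maxBlocksPerSM, mykernel, blockSize, 0 /* dynamic smem */);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    float occupancy = (float)(maxBlocksPerSM * blockSize) /
                      (float)prop.maxThreadsPerMultiProcessor;
    printf("Occupancy at blockSize=%d: %.0f%%\n",
           blockSize, occupancy * 100.0f);
    return 0;
}
[/code]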

Thanks for all the input!

A direct link to the Occupancy Calculator is in:
The Official NVIDIA Forums | NVIDIA

This is the spreadsheet that my coworker had seen before, but I was under the impression from talking with him that the “GPU cores vs CPU cores” issue was addressed, which it is not.

I guess the general guideline mentioned above, “one CPU core per GPU”, is the short and simple answer?

Yeah, that’s seldom wrong for a system dedicated to CUDA, except in special situations. Some systems can get by with fewer, and if you are doing a lot of CPU work that does not directly depend on CUDA, you might want more.