Theoretical Model showing speedup of GPU over CPU?

Has anyone seen any kind of mathematical or computational model showing the theoretical speedup of the GPU over the CPU? I've yet to see one and was curious whether any of you have come across such a model.

Thanks.

Here are some discussions I found:
http://forums.nvidia.com/index.php?showtopic=79694

http://forums.nvidia.com/index.php?showtopic=100454

Both simply talk, in general terms, about Amdahl's law. But I have yet to find a computational model detailing the GPU/CPU relationship with an analysis of the theoretical speedup.
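For reference, Amdahl's law on its own is easy to sketch: if a fraction p of the runtime can be offloaded and accelerated by a factor s, the overall speedup is 1 / ((1 - p) + p/s). A minimal illustration (the 100x kernel speedup is just an assumed number, not a measurement):

```python
def amdahl_speedup(parallel_fraction, parallel_speedup):
    """Amdahl's law: overall speedup when a fraction of the work is
    accelerated by a given factor and the rest remains serial."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / parallel_speedup)

# Even an assumed 100x GPU kernel speedup is capped by the serial part:
for p in (0.5, 0.9, 0.99):
    print(f"p={p}: overall speedup = {amdahl_speedup(p, 100):.2f}x")
```

This is exactly why the linked threads stop at generalities: the law bounds the speedup but says nothing about where the GPU's per-kernel factor comes from.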

For what it’s worth, member “tera” provided an excellent link to an Intel white paper on CPU/GPU performance comparisons in the posted comment at the bottom of page 1 in the “CUDA Kernel self-suspension ?” discussion.

The paper uses implementations of a handful of well-known algorithms, each optimized for its respective target architecture, to measure and compare peak performance characteristics.

Thanks. I’ve browsed that paper previously and will do so again.

I'm more interested in whether anyone has developed a mathematical or computational model of the GPU/CPU relationship, ideally one that discusses Amdahl's Law or Gustafson's Law and also captures some of the intricacies, such as bandwidth or computation bottlenecks. That's a lot to ask, but surely it's been done?
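The closest thing I've seen to what you describe is combining Gustafson's scaled-speedup law with a roofline-style bound, which is where the bandwidth-versus-computation bottleneck shows up. A rough sketch, with made-up peak numbers standing in for real GPU and CPU specs:

```python
def gustafson_speedup(parallel_fraction, n):
    """Gustafson's law: scaled speedup when the parallel part of the
    problem grows with the number of processors n."""
    return (1.0 - parallel_fraction) + parallel_fraction * n

def roofline_gflops(arith_intensity, peak_gflops, peak_gb_s):
    """Roofline-style bound: attainable GFLOP/s is the lesser of the
    compute peak and bandwidth * arithmetic intensity (FLOPs/byte)."""
    return min(peak_gflops, peak_gb_s * arith_intensity)

# Illustrative (assumed, not real) peaks for a GPU and a CPU:
gpu = dict(peak_gflops=1000.0, peak_gb_s=150.0)
cpu = dict(peak_gflops=100.0, peak_gb_s=25.0)

for ai in (0.25, 1.0, 10.0):  # FLOPs per byte moved from memory
    g = roofline_gflops(ai, **gpu)
    c = roofline_gflops(ai, **cpu)
    print(f"AI={ai}: GPU {g:.1f} GFLOP/s, CPU {c:.1f} GFLOP/s, "
          f"model speedup {g / c:.1f}x")
```

The point of the sketch is that for low arithmetic intensity both machines are bandwidth-bound, so the modeled GPU/CPU speedup tracks the bandwidth ratio rather than the compute-peak ratio; only compute-bound kernels approach the headline FLOP ratio. Whether anyone has published this as a unified GPU/CPU speedup model is exactly the question.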