How do I calculate Speed Up with CUDA? (I'm using a GeForce GTX 680 card with 1536 CUDA cores.)
In the old days, when only conventional processors existed (no CUDA cores), the relation was:
Speed Up = (time of the best sequential algorithm to solve problem X) / (time for p processors to solve problem X in parallel)
So, if a problem is solved sequentially in 100 seconds, and the same problem is solved with 2 processors in 50 seconds, the speed up is 2. When the Speed Up equals the number of processors used, it is theoretically optimal (never the case in real life, because of many factors).
Speed Up with 2 processors = 100/50 = 2
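In code, the old two-processor example works out like this (the times are just the illustrative numbers above, not real measurements):

```python
# Speed up and parallel efficiency from measured wall-clock times.
t_seq = 100.0  # seconds, best sequential algorithm (illustrative)
t_par = 50.0   # seconds, parallel run on p processors (illustrative)
p = 2          # number of processors

speed_up = t_seq / t_par     # 100 / 50 = 2
efficiency = speed_up / p    # 2 / 2 = 1.0, the theoretical optimum

print(f"speed up = {speed_up}x, efficiency = {efficiency:.0%}")
```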
Now, with the 1536 CUDA cores available on the GTX 680, my old professor expects a speed up of 1536x. The speed up for my algorithm is only 30x (a waste of resources in his eyes).
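Applying the same formula to my case, treating each CUDA core as a full processor (which is exactly the assumption I'm questioning):

```python
# Same efficiency formula, applied to the GTX 680 numbers from above.
speed_up = 30.0  # measured speed up of my algorithm
cores = 1536     # CUDA cores on a GeForce GTX 680

# Efficiency if every CUDA core counted as an independent processor.
efficiency = speed_up / cores

print(f"efficiency = {efficiency:.2%}")  # just under 2%
```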
What would be your answer for an old supercomputer professor? Is Speed Up still simply (time of sequential algorithm / time of parallel algorithm)?
Thanks in advance!