Suppose I have a non-graphical application (say, an FFT of 10,000 data points) that I wrote for an 8800 GT, and I got an X-times speedup compared to optimized CPU code.
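For concreteness, here is a minimal sketch of the kind of workload I mean, assuming the standard cuFFT library (my actual code differs; error checking and the host-side transfers are omitted):

```c
#include <cuda_runtime.h>
#include <cufft.h>

#define N 10000  /* number of data points, as above */

int main(void) {
    /* Allocate device memory for N complex samples. */
    cufftComplex *d_data;
    cudaMalloc((void **)&d_data, sizeof(cufftComplex) * N);
    /* ... cudaMemcpy host samples into d_data here ... */

    /* Plan and run a single 1D complex-to-complex forward FFT, in place. */
    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
    cudaDeviceSynchronize();

    /* ... cudaMemcpy the result back to the host here ... */
    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}
```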
In which of the following cases will I get a speedup of more than X times?
1- Running the same code (without any modifications) on a GPU with more cores than the 8800 GT, such as a GTX 200 series or Tesla card.
2- Running the same code (without any modifications) on two GPUs in SLI mode (two 128-core GPUs); see the sketch below.
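On case 2: as far as I understand, a CUDA program that never calls cudaSetDevice issues all of its work to device 0, whatever the SLI setting, which is why I am asking about unmodified code specifically. A minimal sketch of what selecting a second device explicitly looks like:

```c
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices visible: %d\n", count);

    /* Unmodified single-GPU code implicitly uses device 0 for every
       allocation and kernel launch. Targeting a second GPU requires an
       explicit selection in the host code: */
    if (count > 1)
        cudaSetDevice(1);  /* subsequent CUDA calls in this thread go to GPU 1 */

    return 0;
}
```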
And by the way, what would happen if my application were graphical (rendering, etc.)?
I just need a short answer to verify my understanding.
Thanks in advance for your time!