In only one loop, is a GTX 280 still faster than an E8500? ...a test.

Xeon Isis :book:
2008/7/31 05:26 PM


#include <stdio.h>   // stdio functions are used since C++ streams aren't necessarily thread safe
#include <iostream>  // needed for cout / endl
// #include <cutil.h> // CUDA SDK utility header; not actually needed for this host-only loop
using namespace std;

int main(int argc, char *argv[])
{
    printf("hello world\n");
    int a = 0, b = 0, c = 0, d = 0, e = 0, f = 0, g = 0; // b..f are unused
    for (int i = 0; i < 2147483647; i++) {
        a += 1;
        if (a % 5 == 0) { a = a - 1; }
    }
    g = a;
    cout << g << endl;
    return 0;
}
********************************************** :oops:
Although there is only one loop, the GTX 280 is still about 40% faster than the E8500.
The problem is that it is only one loop: the GTX 280's GPU engine runs at 602 MHz, while the E8500 runs at 3166 MHz.
In the code, the variables must be declared in host memory, mustn't they?
WHY? Why is a 602 MHz engine faster than a 3166 MHz core? :shock:
(can someone tell me how to post an image?)
the image is temporarily here:http://cid-e699f8d8746c68be.skydrive.live.com/self.aspx/CUDA/2008-07-31|_173904.jpg

Is that a picture of the Windows Task Manager?
The Windows Task Manager only shows CPU usage, not GPU usage. CUDA kernel launches are asynchronous, so when you launch a kernel, control returns to the CPU before the kernel has finished.
That may be the problem.
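On the asynchronous-launch point: if the GPU time was measured with a host-side timer stopped right after the launch, it mostly measures launch overhead, not the kernel. One way to time the kernel correctly is with CUDA events, synchronizing before reading the elapsed time. A minimal sketch (the kernel name `busy_loop` and the scaffolding are mine, not from the original post, and error checking is omitted):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// Single-thread kernel mimicking the host loop from the post.
__global__ void busy_loop(int *out)
{
    int a = 0;
    for (int i = 0; i < 2147483647; i++) {
        a += 1;
        if (a % 5 == 0) { a = a - 1; }
    }
    *out = a;
}

int main()
{
    int *d_out;
    cudaMalloc((void **)&d_out, sizeof(int));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    busy_loop<<<1, 1>>>(d_out);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop); // wait until the kernel has actually finished

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel time: %f ms\n", ms);

    int result;
    cudaMemcpy(&result, d_out, sizeof(int), cudaMemcpyDeviceToHost);
    printf("result: %d\n", result);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_out);
    return 0;
}
```

Note this launches a single thread in a single block, so it uses one GPU core; a fair single-thread comparison would also need the CPU side timed the same way, with optimizations accounted for.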