Hello all,
I am currently migrating some CPU code to the GPU and I am facing an accuracy issue. I have this simple code on the GPU:
__device__ float Simu_ExpoReduitGPU(float const a,
                                    float const b,
                                    float const p1,
                                    curandState *pState) {
    float random, x, C;
    C = p1 / (1 - expf(-p1 * b));
    float test = 1 - expf(-p1 * b);
    float testp1 = p1 / test;
    random = 0.5;
    x = -(log(1 - random * p1 / C)) / p1;
    printf("EXPO %f %f %f %f %f %f %f\n", x, p1, b, C, p1 * b, test, testp1);
    return x;
}
On the CPU:
float Simu_ExpoReduit(float const a,
                      float const b,
                      float const p1) {
    float random, x, C;
    C = p1 / (1 - expf(-p1 * b));
    float test = 1 - expf(-p1 * b);
    float testp1 = p1 / test;
    // float random = (float) rand()/RAND_MAX;  C++ generator on [0,1]
    random = 0.5;
    x = -(log(1 - random * p1 / C)) / p1;
    printf("EXPO %f %f %f %f %f %f %f\n", x, p1, b, C, p1 * b, test, testp1);
    return x;
}
This piece of code is supposed to simulate an exponential law.
When I run this code on the CPU I get the following result:
EXPO 0.249519 0.015359 0.500000 2.007695 0.007679 0.007650 2.007695
and on the GPU:
EXPO 0.249523 0.015359 0.500000 2.007679 0.007679 0.007650 2.007679
The variable testp1 has different values on the CPU and the GPU. I would have accepted a difference in the last digit, but here the last two digits differ.
Is this normal?
I have read
https://developer.nvidia.com/sites/default/files/akamai/cuda/files/NVIDIA-CUDA-Floating-Point.pdf
and
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
but I have not been able to decide whether this difference is expected or not.
I am compiling with CUDA 5.5.
My GPU is a GTX 680 (compute capability 3.0).
My CPU is an Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz.
My architecture is x86_64.
My kernel is 3.2.68-server.
Thank you very much for your help