The same snippet of code, run with the same input, gives different results on the GPU than on the CPU. The inputs are double precision, and the code computes the cosine and sine of the input. I am using a Tesla K80 GPU. Why do the values differ for the same code running with the same input on GPU and CPU?
The sin and cos (and other) functions provided by the math library on the GPU are not guaranteed in any way to give bit-identical answers to the corresponding functions from a host (CPU) math library. However, they should give numerically accurate results, and the accuracy is specified: the maximum error of each device math function, expressed in ulps (units in the last place), is documented in the CUDA C Programming Guide.
There are also other, more general reasons why floating-point results may differ between CPU and GPU, such as the GPU compiler's use of fused multiply-add (FMA) contraction and differences in the order of operations. The NVIDIA whitepaper "Precision & Performance: Floating Point and IEEE 754 Compliance for NVIDIA GPUs" discusses a number of them.
Finally, when working in double precision (or single precision), it's usually a good idea to make sure that no inadvertent casting or switching from one type to the other is occurring, that floating-point constants conform to the desired type (e.g. `1.0` rather than `1.0f` for double), and that, where applicable, the math functions expecting/returning the appropriate type are being used (e.g. `sin()` rather than `sinf()` for double arguments).