Strange float difference between CPU and GPU

Recently I have noticed some strange results in GPU floating-point computation.
First, I use Nsight to inspect the value on the GPU and printf("%.10f") on the CPU.
When I set the input to 0.1, the result is 0.1 on the GPU but 0.1000000015 on the CPU.
When I set the input to 0.11, the result is 0.11 on the GPU but 0.1099999994 on the CPU.
When I set the input to 0.111, the result is 0.111 on the GPU but 0.1110000014 on the CPU.
When I set the input to 0.1111, the result is 0.1111 on the GPU but 0.1111000031 on the CPU.

When I set the input to 0.123456781, the result is 0.12345678 on the GPU but 0.1234567835 on the CPU.
When I set the input to 0.123456789, the result is 0.12345679 on the GPU but 0.1234567910 on the CPU.
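
For reference, here is the kind of minimal test I am running (a sketch only; the kernel name and launch configuration are made up, and it assumes a device of compute capability 2.0 or later so that device-side printf is available). It prints the very same float with %.10f from both the host and a kernel, so the comparison does not depend on how a debugger chooses to display the number:

```
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical test kernel: simply prints the float it receives.
__global__ void showOnGpu(float x)
{
    printf("GPU printf: %.10f\n", x);   // device-side printf (sm_20 and later)
}

int main()
{
    float x = 0.1f;                      // try 0.11f, 0.111f, 0.123456789f, ...
    printf("CPU printf: %.10f\n", x);
    showOnGpu<<<1, 1>>>(x);
    cudaDeviceSynchronize();             // flush the device-side printf buffer
    return 0;
}
```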

I wonder which result is more accurate in the computation.

And I would like to understand the details behind these strange results.

I know the CPU result conforms to the IEEE 754 standard.

Can I conclude that the GPU is more accurate than the CPU in floating-point computation, or not?

Thanks for your help.

I would suggest reading this white paper, plus the references it cites:

http://developer.download.nvidia.com/assets/cuda/files/NVIDIA-CUDA-Floating-Point.pdf
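
To see concretely what the paper is describing, here is a small sketch (plain host code, nothing GPU-specific is assumed) that prints the bit pattern and a longer decimal expansion of 0.1f. A float cannot store 0.1 exactly; it stores the nearest representable single-precision number:

```
#include <cstdio>
#include <cstring>
#include <cstdint>

int main()
{
    float x = 0.1f;

    // Reinterpret the float's bits as a 32-bit integer (no numeric conversion).
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);

    printf("bit pattern: 0x%08x\n", bits);   // 0x3dcccccd for 0.1f
    printf("%%.10f     : %.10f\n", x);       // 0.1000000015
    printf("%%.30f     : %.30f\n", x);       // 0.100000001490116119384765625...
    return 0;
}
```

If the GPU side holds this same bit pattern, then the short "0.1" shown by Nsight and the longer "0.1000000015" printed on the CPU are just two renderings of the one stored value, rounded to different numbers of decimal digits.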

I have read it, but I still cannot figure out why the GPU results do not match the CPU results.

And further, is the GPU more accurate than the CPU in our case above?