This is common to many compilers: when arithmetic operations are performed on constants, or on variables holding constant values (true compile-time constants, not data from constant memory :) ), the compiler pre-computes the result and uses it instead of compiling the computation itself.
For example:
int a = 178868;
float b = sqrt( (float) a );
becomes, before the translation to machine code:
float b = 422.927886962890625f;
Since the compiler runs on the CPU, it uses the CPU's floating-point operations, which are IEEE-compliant, so the number is correct and naturally equal to the same operation computed independently on the same CPU (or any other IEEE-compliant CPU).
But your GPU is NOT designed to be fully IEEE-compliant: many complex floating-point operations take shortcuts to be faster instead of returning results with full-precision correctness. This is absolutely normal, and correct behavior for a GPU.