Bug in GLSL expression evaluation

I have been testing my GLSL parser against the one in the OpenGL driver using randomly generated expressions, and came upon one mismatch that seemed rather puzzling. Reduced to the shortest form where the bug still manifests, it is as follows:

max(0.481, clamp(atan(mix(2.816, 1.971, 0.714), 5.446), -96.827, -65.935))

The expected result is 0.481: mix gives roughly 2.213, atan(2.213, 5.446) is roughly 0.386, clamping that to [-96.827, -65.935] gives -65.935, and max(0.481, -65.935) is 0.481. What I actually get on both an Intel and an Nvidia GPU is -65.935. It does seem to evaluate correctly in WebGL, however. I will post more such cases if I encounter them.
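A minimal fragment shader along these lines reproduces the mismatch (the version directive and the pass/fail colouring are my own scaffolding, not part of my actual test setup); it draws green where the expression comes out as the expected 0.481 and red where it does not:

#version 330 core
out vec4 fragColor;

void main() {
    // Everything here is a compile-time constant, so the driver can
    // fold the whole expression; the correct result is 0.481.
    float r = max(0.481, clamp(atan(mix(2.816, 1.971, 0.714), 5.446),
                               -96.827, -65.935));
    // Green if the result matches 0.481, red otherwise.
    fragColor = abs(r - 0.481) < 0.001 ? vec4(0.0, 1.0, 0.0, 1.0)
                                       : vec4(1.0, 0.0, 0.0, 1.0);
}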

I have found several other expressions that seem to exhibit the same issue:

clamp(min(atan(-2.452, sign(degrees(0.781))), -1.379), 98.959, 284.884)
float((int(atan(-12.280, length(asin(vec4(0.164, 0.226, 0.511, 0.401)))))%-27))
clamp(min(-3.569, atan(-5.977, smoothstep(-5.264, 11.859, 3.366))), 124.313, 233.453)
clamp(clamp(atan(smoothstep(0.887, 15.829, 13.970), -15.574), 57.331, 68.378), 97.279, 323.435)

As you can see, all of them involve the atan function (the two-argument atan2 variant), so it seems there might be something wrong with it. Most of them, but not all, also contain the clamp function; the clamp range is always in ascending order, though, so that is not the issue.
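For anyone less familiar with the GLSL overloads, atan(y, x) is the quadrant-aware form, equivalent to atan2 in C:

// atan(y, x) returns the angle whose tangent is y/x, using the signs
// of both arguments to pick the quadrant; the result lies in [-pi, pi].
float a = atan(-2.452, 1.0);   // ~ -1.1835
float b = atan(2.452, -1.0);   // ~  1.9581, a different quadrant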

What’s interesting is that simplifying the expressions any further makes the bug disappear, while changing the numbers does not. The bug also goes away when uniforms are involved instead of literals, so that the expression cannot be simplified at compile time, which makes me suspect it originates in some static expression simplifier. This probably isn’t too serious, since nobody is going to put this kind of expression in a real shader, but it might reveal a deeper underlying problem.
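To illustrate, here is the uniform variant of the harness above (the uniform name and the host-side value are my own choices; my actual test setup differs). With u_t fed in from the application, the compiler cannot fold the expression, and the shader produces the correct 0.481:

#version 330 core
uniform float u_t; // the application sets this to 0.714
out vec4 fragColor;

void main() {
    // Same expression as before, but the mix factor is now a uniform,
    // so the compiler cannot fold it; this variant evaluates correctly.
    float r = max(0.481, clamp(atan(mix(2.816, 1.971, u_t), 5.446),
                               -96.827, -65.935));
    fragColor = vec4(r);
}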