Bug report: GL_ARB_conservative_depth spec violation

OS: Linux-x86_64
Driver: 387.34

OpenGL 4.5 Core Profile context

I am attaching an apitrace .trace which should reproduce the issue when the OpenGL commands are replayed:
https://github.com/devshgraphicsprogramming/BugTraces/tree/master/NVIDIA/ARB_conservative_depth

Basically, I draw a screen quad covering half of the screen at depth=1.0 (the near plane), but in its fragment shader I write gl_FragDepth = 0.0, the far-plane depth.

Obviously I am using reverse-Z, as you can see from the values.

If I do not redeclare gl_FragDepth, or redeclare it with "layout (depth_any) out float gl_FragDepth;", the cows and flowers in the scene do not get occluded (correct).

If I redeclare gl_FragDepth as "layout (depth_unchanged) out float gl_FragDepth;" and violate that by writing a value different from gl_FragCoord.z into gl_FragDepth, the written value is discarded and the original depth is written to the depth buffer instead, which results in half of the screen being occluded (incorrect).
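A minimal fragment shader along the lines of what the trace does (the color output name is illustrative, not taken from the trace):

```glsl
#version 450 core
#extension GL_ARB_conservative_depth : enable

// Redeclare gl_FragDepth as depth_unchanged, then deliberately violate
// the promise by writing a value different from gl_FragCoord.z.
layout (depth_unchanged) out float gl_FragDepth;

layout (location = 0) out vec4 outColor;

void main()
{
    outColor = vec4(1.0, 0.0, 0.0, 1.0);
    // The quad is rasterized at gl_FragCoord.z == 1.0 (near plane under
    // reverse-Z). Per the spec, this 0.0 must still be the value written
    // to the depth buffer, qualifier violation or not.
    gl_FragDepth = 0.0;
}
```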

Now, I know I declared it as depth_unchanged and wrote to it anyway, but the ARB_conservative_depth spec clearly states:
"
When the depth test passes and depth writes are enabled, the value written to the depth buffer is always the value of gl_FragDepth, whether or not it is consistent with the layout qualifier.
"

On another note: the Hi-Z/Early-Z optimization also seems not to be enabled with reverse-Z, even when I use the consistent depth test function GL_GREATER, the gl_FragDepth qualifier depth_less, and actually modify gl_FragDepth only by decreasing it.
It seems to be an issue under DX too: https://www.gamedev.net/forums/topic/630218-conservative-depth-output-1-zw-depth-earlydepthstencil-and-early-z/
It's rather strange to me: you already keep flags on what fragment shaders do with gl_FragDepth, so it would be quite easy for the driver to check programmatically whether the out qualifier and the depth function are consistent and keep early-Z enabled, just as in the equivalent case under a normal Z-buffer setup (GL_LESS plus depth_greater).
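For reference, this is the combination I mean, where the qualifier and depth function are consistent and early-Z culling should remain valid (the depth bias value is just an illustration):

```glsl
#version 450 core
#extension GL_ARB_conservative_depth : enable

// Reverse-Z with glDepthFunc(GL_GREATER): depth_less promises the shader
// only ever moves fragments further away (to a smaller depth value), so a
// fragment that already fails GL_GREATER at gl_FragCoord.z can never pass
// after the shader runs -- early-Z rejection stays correct.
layout (depth_less) out float gl_FragDepth;

void main()
{
    gl_FragDepth = gl_FragCoord.z - 0.001; // only ever decreases depth
}
```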
I know that the spec only says the implementation may perform such optimizations and doesn't have to, but come on…