100% CPU usage when calling glReadPixels during off-screen render

I am trying to perform an off-screen render, read the results back into RAM, and use them to update my on-screen render. However, one of my CPU cores is pegged at 100%.

I have boiled the issue down to a relatively short test case:
/* Off-screen framebuffer */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color_buf);

glViewport(0, 0, 1, 1);

glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glReadPixels(0, 0, 1, 1, GL_BGRA, GL_UNSIGNED_BYTE, data);

/* Window buffer */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glViewport(0, 0, 800, 600);

glClearColor(0, 0, 0.2, 1);
glClear(GL_COLOR_BUFFER_BIT);


glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(pos), pos, GL_STATIC_DRAW);

glEnableVertexAttribArray(pos_loc);
glVertexAttribPointer(pos_loc, 3, GL_FLOAT, GL_FALSE, 0, 0);

glDrawArrays(GL_TRIANGLES, 0, 3);


If I comment out the call to glReadPixels, the CPU usage drops to nearly zero. The glReadPixels call takes about 16 ms to complete, which corresponds to my monitor's 60 Hz refresh rate. It acts as though the GPU won't allow the read until the next monitor refresh, and the CPU sits in a spin-loop waiting for that to happen. I would expect that behavior for an on-screen framebuffer, but not for an off-screen one. I have also tried using a pixel buffer object, but glMapBuffer shows the same behavior. Is there an OpenGL call that I am missing?
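For reference, the PBO variant I tried follows the usual asynchronous-readback pattern, roughly like this (a sketch, with placeholder names; the real code renders to the FBO before the read):

```c
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, 4, NULL, GL_STREAM_READ); /* 1x1 BGRA pixel */

/* With a PACK buffer bound, the last argument is an offset into the PBO
   and glReadPixels should return immediately; the copy runs on the GPU. */
glReadPixels(0, 0, 1, 1, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0);

/* ... one frame later ... */
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
GLubyte *ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY); /* this call spins instead */
if (ptr) {
    /* use ptr[0..3] */
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```

The map is deferred by a frame precisely so the transfer has time to finish, yet glMapBuffer still burns a full core until the next refresh.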

I am running on Linux with a GTX 1650 using driver version 435.21. My CPU is an AMD Ryzen 5 3600. I had similar code working previously on Intel integrated graphics.