Operating System Synchronization

Is there a specific amount of time that should be allocated for synchronization between the UI and the OS? I have installed a second GPU, but I can't find anything that specifies how much time the OS requires for synchronization when computing with only a single GPU. I have read in another post that calculations should not block the OS for more than 2 seconds, but I was hoping there is a specification rather than trial and error, so that the GPU could pause, synchronize, and then continue work even if the total computation exceeds 2 seconds.
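As far as I can tell, the CUDA runtime itself only reports whether such a run-time limit applies on a device, not how long it is. A minimal sketch of that query (standard runtime API; nothing here is specific to any particular setup):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Ask the runtime whether kernels on device 0 are subject to a
    // run-time limit (on Windows this reflects the display watchdog).
    int limited = 0;
    cudaDeviceGetAttribute(&limited, cudaDevAttrKernelExecTimeoutEnabled, 0);
    printf("Kernel run-time limit on device 0: %s\n", limited ? "yes" : "no");
    return 0;
}
```

The 2-second figure itself appears to be an OS-level default rather than something the runtime exposes.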

Hi randy.krouth,

To be honest, I have no idea what you are referring to.

What UI and what OS?
What kind of synchronization?

A bit of clarification about what you want to achieve would help a lot.

Thanks!

Hi,
Yes, if there is only a single GPU installed and a computation takes longer than 2 seconds, the display becomes distorted. Therefore a second card is required to drive the display if you want the full performance of one GPU for compute. However, I was thinking that if the computation were set up like a digital clock (a square wave of work and idle periods), the operating system could synchronize with the display throughout the computation rather than distorting it.
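On the two-card point: one way the heavy work might be steered to the card that is not limited by the display timeout is to check the same run-time-limit flag per device. A rough sketch (standard CUDA runtime calls; whether the flag actually distinguishes the two cards depends on the driver mode):

```cpp
// Rough sketch: steer long-running kernels to a GPU that does not report a
// run-time limit. Caveat: under Windows/WDDM both cards may still report
// the limit unless the compute card runs in a non-display driver mode.
#include <cstdio>
#include <cuda_runtime.h>

int pickComputeDevice() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        if (!prop.kernelExecTimeoutEnabled) {
            return dev;  // no watchdog limit reported: safe for long kernels
        }
    }
    return 0;  // fall back to device 0; chunk the work there instead
}

int main() {
    int dev = pickComputeDevice();
    cudaSetDevice(dev);
    printf("Running long compute on device %d\n", dev);
    return 0;
}
```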

Which OS and what kind of system setup is it where your display gets distorted when doing compute loads?

If it is a GPU that is set up to act as the main display adapter in a system then the image should not distort at all, whatever the compute load is.

My setup is Windows 11 with two RTX-model GPUs and no integrated graphics. I was surprised that nothing was mentioned about this in the books I have read. However, from posts I reviewed online, if a single GPU is installed and is used to drive the display, the OS cannot interrupt the GPU mid-kernel, so the display will not update until the computation completes. I didn't realize this was the case at first, but several times I had to restart the computer because the graphics card did not return to the monitor's original resolution: the display was windowed very small, with a larger display at the correct resolution in the background.

There are a significant number of posts in this forum discussing that a computation should not run longer than 2 seconds if the GPU is also used for the display. However, I was planning to create a function that would chunk the computation in the kernel and synchronize between chunks. For instance: process x million lines, synchronize, then continue processing where x left off. The operating system would then update the display between chunks of computation.
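A sketch of how that chunked loop might look; `process`, the sizes, and the per-element work are all hypothetical placeholders, not the actual kernel:

```cpp
// Sketch of the chunking idea: split one long computation into many short
// kernel launches and synchronize between them, so no single submission can
// exceed the display timeout. All names and sizes here are hypothetical.
#include <cuda_runtime.h>

__global__ void process(float* data, size_t offset, size_t n) {
    size_t i = offset + (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + n) {
        data[i] *= 2.0f;  // stand-in for the real per-element work
    }
}

int main() {
    const size_t total = 100000000;  // total elements ("x million lines")
    const size_t chunk = 1000000;    // sized so one launch stays well under
                                     // the ~2-second display timeout
    float* d_data = nullptr;
    cudaMalloc(&d_data, total * sizeof(float));

    const int threads = 256;
    for (size_t offset = 0; offset < total; offset += chunk) {
        size_t n = (total - offset < chunk) ? (total - offset) : chunk;
        int blocks = (int)((n + threads - 1) / threads);
        process<<<blocks, threads>>>(d_data, offset, n);
        // Wait for this chunk to finish; each launch is a separate
        // submission, so the display driver can service the screen
        // in between.
        cudaDeviceSynchronize();
    }

    cudaFree(d_data);
    return 0;
}
```

As I understand it, the watchdog applies per submission, so each chunk effectively resets it; the chunk size is then a trade-off between display responsiveness and the overhead of the extra launches and synchronizations.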