I am developing an application that renders to 4 screens connected to several graphics cards. After working on it for about 2 months, I have several questions I have not been able to solve by myself, and I hope for help from the community. You are my last hope.
I have four identical graphics cards, each with output to its own display. I create four windows (one per display). Each window displays the same scene from a different angle, so I have 4 different cameras and 4 different viewports. There are no CPU-side calculations, only rendering. I expected the FPS to be roughly the same as for a 1-window application, since all 4 graphics cards show equally low utilization (I measured it with GPU-Z). But the FPS of the 4-screens-on-4-cards application is 3 times lower than that of the 1-screen application! Why?
One more observation: if I run a 2-window application, then:
a) if both windows are displayed on two monitors connected to the same graphics card, the FPS is exactly half that of the 1-window application;
b) but if the two monitors are connected to two different graphics cards, the FPS drops to less than half that of the 1-window application! Why? I expected it to stay close to the 1-window level, because the graphics cards are independent of each other.
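The numbers in (a) and (b) look to me as if the windows are rendered sequentially on a single thread, with some extra per-frame cost whenever the driver switches between contexts on different cards. Here is a toy timing model of what I suspect is happening (the function and all the numbers are purely illustrative, not measurements from my application):

```python
def fps(n_windows: int, t_render: float,
        n_context_switches: int = 0, t_switch: float = 0.0) -> float:
    """FPS under fully serialized rendering.

    t_render  -- time to render one window, in seconds
    t_switch  -- assumed cost of one cross-GPU context switch, in seconds
    """
    frame_time = n_windows * t_render + n_context_switches * t_switch
    return 1.0 / frame_time

t = 1 / 300.0  # single-window render time, i.e. a 300 FPS baseline

print(fps(1, t))                                    # baseline: 300 FPS
print(fps(2, t))                                    # case (a): exactly half, 150 FPS
print(fps(2, t, n_context_switches=2, t_switch=t))  # case (b): less than half
```

With zero switch cost the model reproduces case (a) exactly (FPS halves); any nonzero cross-GPU switch cost pushes it below half, like case (b). Is this serialization really what happens, and can it be avoided?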
I would be very happy to hear your comments and ideas.