I’m pretty new to high-performance GPUs and displays, so apologies if this is a basic question.
I’m a designer working on a museum exhibition with 7x 4K UHD displays (3840x2160). I’m driving all of the displays from one PC running Windows 10 with a pair of A6000s. We are not currently using Mosaic or any other tools to divide the displays or run them like a video wall.

I’ve got 7x video files, and I’m playing each one on a separate screen. We’re using the native Windows display settings to extend the desktop across all 8 outputs and to set their resolutions. I’m using media server software called LightAct to route the videos to the different outputs – one video per output, one display per output. The 8th output drives a 1080p monitor for backend control of the system. We’re currently testing with 7x 4K monitors, but the final installation will use 7x 4K TLCD screens.
I’ve just begun testing the system, and as I enable the displays in my software, the output frame rate drops dramatically (from 60 fps down to 20 fps once all 7 displays are running), and I’m trying to figure out why. The frame rate drops regardless of the content on the displays. The software runs with all the content playing at 60 fps until I begin actually routing the videos to the GPU outputs, which makes me think it’s an issue on the hardware side rather than with my media server software.
My understanding is that each A6000 can drive up to four 5K monitors at 60 Hz, and I’m well below the number of displays Windows supports, so it doesn’t seem like I’m pushing too many pixels.
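For what it’s worth, here’s my back-of-envelope math on raw pixel throughput (assuming everything targets 60 Hz, and taking the 4x 5K-per-card figure from the spec sheet – please correct me if I’ve misread it):

```python
# Rough pixel-throughput sanity check for the setup described above.
# Assumption: both A6000s share the playback load evenly and the
# spec-sheet figure of 4x 5K @ 60 Hz per card holds.

uhd = 3840 * 2160           # pixels per 4K UHD frame
five_k = 5120 * 2880        # pixels per 5K frame

workload = 7 * uhd * 60     # 7x 4K displays at 60 Hz
capacity = 2 * 4 * five_k * 60  # two cards, 4x 5K @ 60 Hz each

print(f"workload: {workload / 1e9:.2f} Gpx/s vs. "
      f"capacity: {capacity / 1e9:.2f} Gpx/s")
# prints: workload: 3.48 Gpx/s vs. capacity: 7.08 Gpx/s
```

So by that crude measure I’m at roughly half of what the two cards should handle on the scan-out side, which is why the drop to 20 fps surprises me – unless decode or copy bandwidth is the real bottleneck rather than display output.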
For this kind of application, are there settings for the A6000s I should check, or any other reason there might be issues on the hardware side? Apologies again – this kind of multichannel work is new to me.