So I’m working on a gaming cockpit with a GPU system that will display 4 Out-The-Window scenes. The cockpit has 4 windows, each spaced a few inches apart (and at slightly different orientations). I’m thinking I’ll mount a separate 27-inch monitor outside each window to simulate that window’s Out-The-Window view. I already have the software that will drive each scene: a custom OpenGL application with plenty of camera controls, which can be started with a specific pre-defined camera view. So with 4 cameras defined, I can start 4 separate instances of the OpenGL application, each coming up with a different camera activated, and hence a different view. Each instance connects to a multicast port to grab data specifying the vehicle’s position and attitude in the gaming world. The vehicle’s position and attitude are computed by a separate app running on another workstation on the same network; the joysticks for attitude control and thrust control will be connected to that computer. I am planning on using Red Hat Linux on all machines.
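For context, here’s roughly how I picture each instance consuming the multicast feed. The packet layout (six little-endian doubles) and the names `parse_state`, `GROUP`, and `PORT` are just assumptions for illustration — my actual app’s protocol may differ:

```python
import struct

# Assumed wire format (an illustration, not the app's real protocol):
# six little-endian doubles -- x, y, z position plus roll, pitch, yaw.
STATE_FMT = "<6d"
STATE_SIZE = struct.calcsize(STATE_FMT)  # 48 bytes

def parse_state(datagram: bytes):
    """Unpack one multicast datagram into (x, y, z, roll, pitch, yaw)."""
    if len(datagram) < STATE_SIZE:
        raise ValueError("short packet")
    return struct.unpack(STATE_FMT, datagram[:STATE_SIZE])

# A receiving instance would do roughly:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("", PORT))
#   mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
#   x, y, z, roll, pitch, yaw = parse_state(sock.recv(1024))

# Round-trip demo with a packet the sender workstation might emit:
pkt = struct.pack(STATE_FMT, 10.0, 20.0, 100.0, 0.0, 5.0, 270.0)
print(parse_state(pkt))  # (10.0, 20.0, 100.0, 0.0, 5.0, 270.0)
```

All four instances join the same group, so one sender update drives all four views at once.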
Now that the setup is out of the way, here are the questions:
- What is the best system design for this: a single workstation with 4 GPUs driving all 4 scenes, or an individual workstation for each Out-The-Window scene?
- If I build a single system with 4 GPUs, I would put each monitor on a separate GPU, extend the user’s desktop across all monitors, and start each scene on a specific monitor. Do I get more performance out of the GPUs if I run in non-SLI mode? (Since each monitor is on a separate GPU, I don’t want Alternate Frame Rendering or any other SLI mode.)
- It’s easy to monitor CPU utilization, but how do I monitor GPU utilization?
- If I start each OpenGL app instance on a separate monitor, does the GPU connected to that monitor handle the rendering load for all graphics within its display area, in effect leaving the other GPUs free to handle their own scenes?
- I’m thinking each monitor will render at 1920x1080, and I’m looking at the GTX Titan for this. What are the most important video card specs for performance? I’m guessing that once a card exceeds a gigabyte of VRAM, the next spec to look at is memory bandwidth?
- If I use a motherboard with eight Intel Core i7 cores, and each instance of the scene app grabs one core for dedicated use, does each core work with a different GPU to render its frame in a non-SLI configuration? Or will every instance try to use one particular GPU, like the one in slot 0?
- Will running concurrent scene apps pile the work onto a single GPU, or will the GPUs do some kind of load balancing?
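On the single-box option, the way I imagine starting each instance on its own GPU is with one separate X screen per card (no Xinerama), then pointing each instance at a screen via DISPLAY. This is just a sketch — it assumes xorg.conf is already set up with four X screens (:0.0 through :0.3), and `scene_app`/`--camera` stand in for whatever my app’s actual command line is:

```shell
# One X screen per GPU, Xinerama off, so each screen renders on its own card.
# scene_app and --camera are placeholders for my real app's CLI.
for i in 0 1 2 3; do
    DISPLAY=:0.$i ./scene_app --camera $i &
done
wait
```

My understanding is that with separate X screens, each instance’s GL context is created on the GPU driving that screen — which is the behavior I’m asking about above.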
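For the GPU-utilization question, the closest thing I’ve found to `top` for the GPUs is nvidia-smi (driver permitting — I’m not sure how much of this GeForce-class cards expose):

```shell
# Print per-GPU load and memory once a second, in CSV form.
nvidia-smi --query-gpu=index,name,utilization.gpu,utilization.memory,memory.used \
           --format=csv -l 1
```

If each scene really lands on its own card, I’d expect to see all four GPUs showing similar utilization here.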
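And for dedicating one core per instance, my plan was to pin each instance with taskset (again, `scene_app` and `--camera` are placeholders for my app’s real command line):

```shell
# Pin instance N to CPU core N so the four instances never share a core.
taskset -c 0 ./scene_app --camera 0 &
taskset -c 1 ./scene_app --camera 1 &
taskset -c 2 ./scene_app --camera 2 &
taskset -c 3 ./scene_app --camera 3 &
```

What I don’t know is whether that core-to-instance pinning buys me anything on the GPU side, which is what the question above is getting at.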
Thanks for the help…just trying to build the best and most efficient system I can.