Using Tesla cards in a large Linux box for accelerating internal GPU buffers?

Hi all.

I’ve got a visual computing/GL/GPU issue to solve.

Here is the situation. I have a very heavily engineered Dell R910 system used for interactive HPC workloads: hundreds of GB of RAM, 64 CPU execution threads, solid-state disk, 10GbE-connected, etc.

One of the fatal flaws of the R910 is its inability to contain any form of useful GPU. I’ve got a lot of users who want to do some interesting things in the GPGPU space, but others who just want some visual grunt over their X11-forwarded sessions.

The box is running the most current CentOS (6.2).

My local Dell crew have offered me a Dell GPU cluster device.

It effectively lets me attach an external Tesla compute chassis to the R910 via its PCIe slots.

But, the operative questions:

  1. Will doing this, under Linux, expose the GPUs inside the Tesla chassis as a hardware acceleration mechanism for video buffers/large image manipulation using OpenGL over X11-forwarded sessions? Or…
  2. Is it only useful in the programmatic compute sense, i.e. if I were addressing its stream processors from something like MATLAB, or via CUDA/OpenCL wrapper classes?
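For what it’s worth, here’s a minimal sketch of the commands I’d run on the box to tell the two cases apart, once the chassis is attached. This assumes the NVIDIA driver package (which ships `nvidia-smi`) and `glxinfo` (from `glx-utils` on CentOS) are installed; the fallback messages are my own.

```shell
# Compute side: list CUDA-capable devices the driver can see.
nvidia-smi -L 2>/dev/null || echo "no NVIDIA driver/devices found"

# Display side: check which renderer GLX sessions actually get.
# Over plain X11 forwarding this typically reports the client-side
# software stack, not the server's Teslas.
glxinfo 2>/dev/null | grep "OpenGL renderer" || echo "no GLX display available"
```

If the first command lists the Teslas but the second shows a software renderer (or nothing), the cards are visible for compute but aren’t accelerating forwarded OpenGL.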

Effectively, I want both. The system needs at least some ability to “use a GPU to display remote graphics”, but it also needs to act as a powerful computational nexus.

Thanks, all.

z