Is there a way to program the Tesla platforms (especially the rackmount S870) to feed results back to a system’s GPUs so that they help calculate graphics in games?
Well, if they can be used to calculate real-life physics, I’m sure you can calculate game physics with them… but given their price and low availability, I don’t think you’ll find many games supporting them :)
I don’t just mean game physics, but also the graphics of the game itself.
Given that they have no monitor outputs and, AFAIK, no SLI bridges, I don’t see how you would be able to do so.
Well, SLI has worked without bridges on lower-end GPUs; the bridge is just a high-speed connection. Couldn’t you take the results from the interface card and pass them to the GPU via the north bridge?
Yes, you can transfer the results from a Tesla GPU to a GeForce or Quadro GPU and use them as geometry (vertex array data) or texture data for display.
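To make that concrete, here is a minimal sketch of how computed results could end up as renderable geometry via CUDA’s OpenGL interop. It assumes a valid GL context, an already-created vertex buffer object, and a hypothetical kernel `update_vertices`; on a display-less Tesla the computed data would first have to be copied over to the display GPU’s card, but the interop calls themselves are the CUDA 1.x API:

```cuda
// Sketch: fill an OpenGL vertex buffer from CUDA, then draw it as a
// vertex array. `update_vertices` and `n` are placeholders.
#include <cuda_gl_interop.h>

GLuint vbo;        // assumed created with glGenBuffers/glBufferData
float4* d_verts;   // device pointer into the mapped buffer

cudaGLRegisterBufferObject(vbo);               // once, after creating the VBO
cudaGLMapBufferObject((void**)&d_verts, vbo);  // map the VBO for CUDA access
update_vertices<<<blocks, threads>>>(d_verts, n);  // compute geometry
cudaGLUnmapBufferObject(vbo);                  // hand the buffer back to GL

glBindBuffer(GL_ARRAY_BUFFER, vbo);            // draw the computed vertices
glVertexPointer(4, GL_FLOAT, 0, 0);
glDrawArrays(GL_POINTS, 0, n);
```

The same pattern works for texture data by copying into a pixel buffer object instead of a vertex buffer.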
Does this work with present drivers (or at least the drivers that will ship with Tesla) and thus only require a game developer to code for the utilization of the Tesla platform?
Not yet, but GPU to GPU transfer is in the 1.1 beta.
I believe David Hoff listed “Optimized peer to peer transfers” as an upcoming feature in his beta announcement.
As far as getting data from one GPU to another today, you can always stage through host memory, but there is no way to use DMA for both copies. So you will see a lot of host participation on one of the memcpy calls when you emulate peer-to-peer through host memory.
This is just an artifact of the APIs for pinning memory being tied to the APIs for allocating memory. We are looking at ways to address this limitation in a future release. Tentatively it would take the form of a function that pins an existing virtual memory range on the host.
Why not create a bridge between the two host interface cards like normal SLI? Would it be overwhelmed? Would the host cards need more processing power and a memory element for buffering?
SLI is probably handled automatically by the graphics driver in the rendering case. There is no API (yet?) to use it for just transferring memory between cards.
Also, if Tesla cards have no SLI bridges, as NSmyrnos says, then this is a moot point :)
While your points all seem valid, I think you will never see Tesla cards calculating graphics for games.
As I understand it, Tesla cards are produced specifically for high-performance computation tasks. To that end they have certain features and fulfill certain requirements needed in the HPC field. Those features, however, make the Tesla cards more expensive than your ordinary graphics device and therefore unsuitable for the consumer PC market.
Additionally, I believe an important requirement for an add-on card like the one you propose would be that it works out of the box, meaning the platform provided to game developers should hide the add-on card and utilize its power automatically. However, I don’t see a possibility for this with CUDA at the moment. If the game already uses CUDA… there might be a possibility.