Hi there,
I read a post on this forum with the following content:
Actually, SLI does use the video memory of both GPUs. SLI has two modes: Split-Frame Rendering (SFR) and Alternate-Frame Rendering (AFR). In the former, each GPU renders a portion of the graphics frame. In the latter, GPUs alternate rendering each frame. In both modes, only one GPU is responsible for pushing the frames out to the display (since a single monitor can’t be connected to both GPUs at once). The SLI connector is simply a “pixel bridge” which the secondary GPU uses to transfer either part of an image (in SFR) or a whole image (in AFR) to the primary GPU to be displayed.
Since both GPUs are rendering the scene, both GPUs need all of the scene’s geometry and textures in their memories. Therefore the memory of all GPUs is used in SLI.
Now on to multi-GPU in CUDA. This is not SLI. Because the graphics API constrains the type of computation being done on the GPU, SLI can make assumptions about how to parallelize the application across multiple GPUs. In GPU computing with CUDA, it’s difficult to come up with assumptions like this that apply to all applications that a programmer might write. Therefore, to run on multiple GPUs, the designers of CUDA decided it would be best to enable CUDA programmers to manage the GPUs in the system themselves.
You can do this by creating as many host threads as you have GPUs, and running a separate CUDA context in each thread. The functions cudaGetDeviceCount(), cudaSetDevice(), cudaGetDevice(), cudaGetDeviceProperties(), and cudaChooseDevice() facilitate this. Please see the programming guide and the “multigpu” sample in the CUDA SDK for more details. There are more multi-GPU samples coming in upcoming SDK releases.
Mark
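(Just to illustrate the quoted advice, not part of Mark's post: the one-host-thread-per-GPU pattern he describes might look roughly like the sketch below. The kernel, the data sizes, and the use of std::thread are my own assumptions; the "multigpu" sample in the CUDA SDK is the authoritative example.)

#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

// Placeholder kernel, only here to give each GPU something to do.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// Each host thread binds to one device and works in its own CUDA context.
void workerThread(int device, int n)
{
    cudaSetDevice(device);                      // bind this host thread to one GPU

    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    scale<<<(n + 255) / 256, 256>>>(d_data, n, 2.0f);
    cudaDeviceSynchronize();                    // wait for this device to finish

    cudaFree(d_data);
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);           // how many CUDA-capable GPUs exist

    // One host thread per GPU, as the quoted post suggests.
    std::vector<std::thread> threads;
    for (int dev = 0; dev < deviceCount; ++dev)
        threads.emplace_back(workerThread, dev, 1 << 20);

    for (auto &t : threads)
        t.join();

    printf("Ran on %d GPU(s)\n", deviceCount);
    return 0;
}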
Does anyone know how I can get an official NVIDIA document (or book) that contains the SLI information posted above (highlighted)? I need it for the bibliography of my diploma thesis.
Thanks a lot.
That information should be in the programming guide.