I have read in older topics (from around 2007) that SLI and CUDA are orthogonal concepts, and that with N-way SLI enabled CUDA sees only 1 GPU, so you have to disable SLI to manage the N GPUs manually.
Nevertheless, the CUDA 5.5 Programming Guide says:
[u][i]"In a system with multiple GPUs, all CUDA-enabled GPUs are accessible via the CUDA driver and runtime as separate devices. There are however special considerations as described below when the system is in SLI mode.
First, an allocation in one CUDA device on one GPU will consume memory on other GPUs that are part of the SLI configuration of the Direct3D or OpenGL device. Because of this, allocations may fail earlier than otherwise expected.
Second, applications have to create multiple CUDA contexts, one for each GPU in the SLI configuration and deal with the fact that a different GPU is used for rendering by the Direct3D or OpenGL device at every frame. The application can use the cudaD3D[9|10|11]GetDevices() for Direct3D and cudaGLGetDevices() for OpenGL set of calls to identify the CUDA device handle(s) for the device(s) that are performing the rendering in the current and next frame. Given this information the application will typically map Direct3D or OpenGL resources to the CUDA context corresponding to the CUDA device returned by cudaD3D[9|10|11]GetDevices() or cudaGLGetDevices() when the deviceList parameter is set to CU_D3D10_DEVICE_LIST_CURRENT_FRAME or cudaGLDeviceListCurrentFrame."[/i][/u]
So it seems the two are not so incompatible after all.
So my question is: can SLI improve work sharing between multiple GPUs, or is it still necessary to manage work sharing manually with GPUDirect and/or streams?
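To be concrete, this is roughly what I mean by "managing work sharing manually": each GPU is selected explicitly with cudaSetDevice() and gets its own buffer and stream. Just a sketch (the kernel, chunk size, and scale factor are placeholders I made up), with no SLI involvement assumed:

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Placeholder kernel: scales each element of a buffer by a constant.
__global__ void scale(float *data, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= f;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    const int chunk = 1 << 20;  // elements per GPU (arbitrary choice)
    std::vector<float*> buffers(deviceCount);
    std::vector<cudaStream_t> streams(deviceCount);

    // Launch one chunk of work on each GPU asynchronously.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);  // every device is managed explicitly
        cudaMalloc(&buffers[dev], chunk * sizeof(float));
        cudaStreamCreate(&streams[dev]);
        scale<<<(chunk + 255) / 256, 256, 0, streams[dev]>>>(buffers[dev], chunk, 2.0f);
    }

    // Wait for all devices, then clean up.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);
        cudaStreamSynchronize(streams[dev]);
        cudaStreamDestroy(streams[dev]);
        cudaFree(buffers[dev]);
    }
    printf("ran on %d device(s)\n", deviceCount);
    return 0;
}
```

My understanding is that SLI does nothing to distribute this kind of compute work automatically; if that is right, the splitting above (or something like it with GPUDirect/peer-to-peer copies) is always up to the application.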
Thanks for your suggestions.