For computation, is it possible to daisy-chain multiple GPUs using, say, SLI? If this is done, do they act as one GPU with shared memory?
To use several GPUs for computation, SLI must be disabled in the drivers.
You’ll see each GPU as a separate device, so it’s your responsibility to select the correct device with cudaSetDevice().
You also need to spawn as many CPU threads as there are GPU devices (i.e. you can’t work with more than one GPU from the same CPU thread). A rough sketch of that pattern is below.
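
As a minimal sketch (my own illustration, not from the original answer; it assumes the CUDA runtime API plus C++11 std::thread, and names like workerThread are made up for the example), the one-thread-per-GPU pattern looks roughly like this:

#include <cstdio>
#include <thread>
#include <vector>
#include <cuda_runtime.h>

// Trivial kernel so each device has something to run.
__global__ void fill(float *data, float value, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = value;
}

// Work done by each CPU thread: everything after cudaSetDevice()
// (allocations, kernel launches, synchronization) targets that device.
void workerThread(int device)
{
    cudaSetDevice(device);

    const int n = 1 << 20;
    float *d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));

    fill<<<(n + 255) / 256, 256>>>(d_data, (float)device, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    printf("device %d done\n", device);
}

int main()
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    // One CPU thread per GPU, as described above.
    std::vector<std::thread> threads;
    for (int dev = 0; dev < deviceCount; ++dev)
        threads.emplace_back(workerThread, dev);
    for (std::thread &t : threads)
        t.join();

    return 0;
}

Each thread binds itself to one device before touching any CUDA resources, so allocations and launches never mix across GPUs; the host simply waits for all worker threads to finish.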