Has anyone used one of these with 8 GPUs? We are looking to deploy eight Tesla K20m cards.
I am looking for confirmation that the CUDA drivers can address all 8 GPUs simultaneously. I seem to recall a restriction of 4 GPUs per CPU, which is fine since there are two CPUs, but can all eight be addressed directly without code modification? Or do the two sockets act as independent nodes with 4 GPUs each, so that we would have to manage communication between them manually, e.g. via MPI?
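For reference, the sanity check I plan to run once any hardware arrives is a simple enumeration of what the driver actually exposes, using the standard CUDA runtime calls. On a single OS image, all eight cards should appear here regardless of which socket they hang off (a minimal sketch, not vendor-specific):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("driver exposes %d CUDA device(s)\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // The PCI IDs reveal the topology, i.e. which root complex
        // (and hence which CPU socket) each card sits behind.
        printf("  device %d: %s (PCI %04x:%02x:%02x)\n",
               i, prop.name, prop.pciDomainID, prop.pciBusID, prop.pciDeviceID);
    }
    return 0;
}
```

My understanding is that a single process can then select any listed device with `cudaSetDevice(i)`, so the question is really whether all eight show up in one enumeration or whether the socket split surfaces some other way.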
This is not clear from the product brochure, so if anyone has experience with this configuration I would be glad to hear about it before committing to a rather substantial expenditure.