I have a problem with a multi-GPU implementation using a P2P strategy on 4 Tesla C2070 (Fermi) cards. With this configuration, P2P communication is only possible between cards 0<->1 and 2<->3 (can anybody confirm this?), but it is not possible to enable communication between cards 1 and 2.
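For reference, here is a minimal sketch of the check I run with the CUDA runtime API (the device indices 1 and 2 are the pair that fails for me; the exact pair may differ on other systems):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int dev = 1, peer = 2;  // the failing pair on my system; 0<->1 and 2<->3 work
    int canAccess = 0;

    // Ask the runtime whether 'dev' can directly address 'peer' memory.
    cudaDeviceCanAccessPeer(&canAccess, dev, peer);
    printf("cudaDeviceCanAccessPeer(%d -> %d): %d\n", dev, peer, canAccess);

    if (canAccess) {
        // Peer access is enabled from the current device, so select 'dev' first.
        cudaSetDevice(dev);
        cudaError_t err = cudaDeviceEnablePeerAccess(peer, 0);  // flags must be 0
        printf("cudaDeviceEnablePeerAccess: %s\n", cudaGetErrorString(err));
    }
    return 0;
}
```

On my system, canAccess comes back 0 for the 1<->2 pair, while 0<->1 and 2<->3 both return 1.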
Can anybody help me with this problem? Is there any parameter I can set to enable cards 1 and 2 to communicate?
Thanks in advance.