I have a two-server compute cluster, with two CPUs in each server. I have installed two ConnectX-7 400Gb InfiniBand cards in each server (four cards in total); in each server, one card sits in a PCIe slot close to each CPU. I have made two direct connections between the servers, and I have two subnet managers running, one for each of the direct-attached IB links. When I run MPI to discover the available fabrics, only a single MLX fabric is identified. Does the MLX fabric abstract the two separate network links from MPI? I would have thought that I would need to pin specific MPI processes to the CPU closest to a particular NIC.
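For reference, this is the quick check I use to see which NUMA node each HCA hangs off before deciding on any pinning. It is a minimal sketch that assumes the standard Linux sysfs layout under /sys/class/infiniband; device names such as mlx5_0/mlx5_1 are examples and will differ per system.

```python
#!/usr/bin/env python3
"""List each InfiniBand HCA and the NUMA node its PCIe slot is attached to.

Minimal sketch: reads the standard Linux sysfs layout under
/sys/class/infiniband. A value of -1 means the kernel could not
determine NUMA locality for that device.
"""
import os

SYSFS_IB = "/sys/class/infiniband"

def hca_numa_nodes():
    """Return {hca_name: numa_node} for every HCA the kernel exposes."""
    result = {}
    if not os.path.isdir(SYSFS_IB):
        return result
    for hca in sorted(os.listdir(SYSFS_IB)):
        numa_path = os.path.join(SYSFS_IB, hca, "device", "numa_node")
        try:
            with open(numa_path) as f:
                result[hca] = int(f.read().strip())
        except (OSError, ValueError):
            result[hca] = None  # path missing or unreadable
    return result

if __name__ == "__main__":
    for hca, node in hca_numa_nodes().items():
        print(f"{hca}: NUMA node {node}")
```

(I'm planning to use this mapping to decide which ranks to bind near which card, if pinning turns out to be necessary.)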