Hi,
We have multiple pairs of HPE SN2410M switches (they're just Mellanox switches painted a different colour, which is why I'm here!). We have them configured as MLAG pairs with a VIP, and they work well. We're upgrading them to Onyx v3.9.1306 at the moment.
We also have some Nexus 9K switches in exactly the same configuration - MLAG pairs (or vPC, as Cisco call it). In all cases, the L3 routing is done by core switches upstream. These switches are all top-of-rack for servers, so everything is MLAG'd down to the hosts. The HPE switches have mgmt0 connected back to back for the VIP, and the IPL runs over a port-channel of two 100G DACs.
However, one major difference I've noticed between the HPE and the Nexus pairs is how the switch-to-switch link is used. The Nexus will put traffic across the connection between the switches, so there's east-west communication over it. The HPEs don't - the documentation clearly says the IPL will only carry traffic in a failure scenario (i.e. when the uplinks die).
All I'm getting at is that we have two 100G connections doing relatively nothing - I could have put in 10G links and saved a whole load of money!
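To put a rough number on "relatively nothing", here's the sort of back-of-the-envelope check you can do from two readings of the IPL members' TX byte counters taken a known interval apart - the interface names and counter values below are just made-up placeholders, not real output from our switches:

    # Work out throughput and utilisation of each IPL member link from
    # two readings of its TX octet counter (grabbed from the CLI, SNMP,
    # or whatever monitoring you already have).

    LINK_SPEED_BPS = 100e9          # 100G member link
    INTERVAL_S = 60                 # seconds between the two readings

    # (first reading, second reading) of TX octets per member - placeholders
    samples = {
        "Eth1/55": (1_200_000_000, 1_203_600_000),
        "Eth1/56": (9_800_000_000, 9_804_100_000),
    }

    for port, (before, after) in samples.items():
        bits_per_sec = (after - before) * 8 / INTERVAL_S
        utilisation = 100 * bits_per_sec / LINK_SPEED_BPS
        print(f"{port}: {bits_per_sec / 1e6:.1f} Mbit/s "
              f"({utilisation:.4f}% of 100G)")

With numbers like those placeholders, each member comes out well under 1 Mbit/s - a fraction of a percent of what the DACs can carry.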
Can we, or should we, do anything to make the 100G DACs useful for traffic? Or is this just how it is with these switches? I've considered putting in a couple of 10G connections and moving the IPL over to them, so I could then use the 100G DACs for an ordinary L2 port-channel between the pair, but then there's all the fun of spanning tree loops to consider (something the Nexus figures out for itself and Just Works without any major fiddling).
What's considered the right thing to do here? We're running Nutanix hyperconvergence, if that makes any difference, but there are other servers attached as well.
Thanks in advance.