Well, I am not really sure about a lot of things.
This is what I have:
3 servers with ConnectX-3 VPI Pro adapters, 56 GbE, dual port
3 servers with ConnectX-3 EN adapters, 56 GbE, dual port
2x SX1036 with the following licenses: 56 GbE + L3
I want maximum performance (load balancing?) and redundancy for my network, and I want to keep things as simple as possible by using only these switches for all of my networking.
Each server is connected with one link to each switch. I guess I should also link the switches to each other, right? For load balancing, routing, etc.?
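On the server side, the usual way to get both load balancing and redundancy out of one link per switch is an LACP bond. A minimal Linux-side sketch, assuming the two ConnectX-3 ports appear as `ens1f0`/`ens1f1` (hypothetical names) and the two switch ports are presented to the host as a single MLAG/LACP port-channel:

```shell
# Hypothetical interface names; assumes the SX1036 pair exposes the two
# server-facing ports as one MLAG/LACP port-channel.
ip link add bond0 type bond mode 802.3ad lacp_rate fast \
    xmit_hash_policy layer3+4
ip link set ens1f0 down
ip link set ens1f1 down
ip link set ens1f0 master bond0
ip link set ens1f1 master bond0
ip link set bond0 up
ip addr add 10.0.0.11/24 dev bond0   # example address
```

With `layer3+4` hashing, different flows spread across both links, while a single flow stays on one link (so per-flow throughput caps at one port's speed).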
As of now I have one subnet, but this could expand in the future; nothing crazy, though.
I'm still negotiating with different ISPs for the best deal and hardware, but most of them ship an FTU/modem with a single 1 Gb RJ45 port. (I'm starting at 500/500 bandwidth.) This brings me to the following questions:
1: What would be the most effective way to connect my SX1036s to the RJ45 (1 Gb) port while minimizing added latency?
2: How much of a difference would a direct SFP+ connection make? Is this something worth fighting for?
3: Would it be possible to bypass the modem/FTU entirely and put each of the pair of fibers (1 up, 1 down) into an SFP+ connector, then connect both SFP+ connectors via a breakout cable to one of the switches? Or would this pose a problem for the SX1036 because the WAN link consists of two ports (one up, one down)? Is this kind of thing even possible with the SX1036 and SX6036 series, or do you need special hardware for it?
So, I absolutely plan to use both switches as L3 routers, while also using them at L2, and actually as my all-in-one networking solution :) Which, of course, still needs a firewall.
Mellanox says the SX1036 is capable of SDN, but this seems very restricted? Are there any firewalls that can run on these boxes? What kind of SDN DOES work on these switches?
If a firewall on the switches is not possible, a VM serving as a firewall on each host would be the next best thing, right? Actually, I think dedicated hardware would probably be easier, but buying two dedicated firewall boxes adds a lot of cost and also adds latency, so because of that I'm thinking of using VMs with SR-IOV instead. Would a Linux-based firewall running inside a VM, using SR-IOV and proper QoS tuning, be faster than a dedicated firewall? If not, how much latency would this add?
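For the Linux firewall VM itself, the policy side is straightforward regardless of how the NIC is attached. A minimal nftables sketch (interface names `wan0`/`lan0` are assumptions): default-drop, allow return traffic, and forward LAN-to-WAN:

```shell
# Minimal default-drop firewall policy; wan0/lan0 are example names.
nft add table inet fw
nft add chain inet fw input '{ type filter hook input priority 0 ; policy drop ; }'
nft add rule inet fw input iifname "lo" accept
nft add rule inet fw input ct state established,related accept
nft add chain inet fw forward '{ type filter hook forward priority 0 ; policy drop ; }'
nft add rule inet fw forward ct state established,related accept
nft add rule inet fw forward iifname "lan0" oifname "wan0" accept
```

At 500/500 Mb/s of WAN traffic, the rule-set itself adds negligible latency; the bigger variable is how packets reach the VM (SR-IOV vs. a software bridge).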
Once my internet connection reaches a host and sits behind a firewall, I need to get it to the running VMs. What would be the lowest-latency way of doing this? I'm sure leveraging the adapters' capabilities can help a lot here. I was thinking SR-IOV? Or would this not work with a firewall VM, since with SR-IOV every VM connects to the adapter "directly"?
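For reference, on ConnectX-3 the VFs are traditionally created via an `mlx4_core` module parameter rather than the generic sysfs knob. A sketch, assuming 4 VFs and both ports in Ethernet mode (the counts and mode are examples):

```shell
# ConnectX-3 (mlx4 driver): create 4 SR-IOV VFs, both ports as Ethernet.
# Requires SR-IOV enabled in the adapter firmware and BIOS.
cat > /etc/modprobe.d/mlx4.conf <<'EOF'
options mlx4_core num_vfs=4 port_type_array=2,2
EOF
# Reload the driver (or reboot), then verify the VFs exist:
modprobe -r mlx4_en mlx4_core && modprobe mlx4_core
lspci | grep -i mellanox   # should list the PF plus the virtual functions
```

Each VF is a PCI function that can be passed through to a VM. Note the catch you suspected: SR-IOV traffic goes VF-to-wire (or VF-to-VF through the NIC's embedded switch), so it does not naturally pass through a firewall VM unless you force it there, e.g. by putting the firewall VM on the WAN-facing port and the guests on a separate VLAN/port.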
Any thoughts on how to solve the firewall problem? Do I even need a dedicated firewall? Can't the switches and adapters do most of the firewall work, combined with NVGRE/VXLAN tunneling?
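On the tunneling side: VXLAN termination can live on the hosts rather than the switches. A minimal host-side sketch with iproute2, assuming VNI 100 over a `bond0` uplink (names, addresses, and the multicast group are examples):

```shell
# Host-terminated VXLAN overlay: VNI 100, flood/learn via multicast.
ip link add vxlan100 type vxlan id 100 dstport 4789 \
    local 10.0.0.11 group 239.1.1.1 dev bond0
ip link set vxlan100 up
ip addr add 192.168.100.11/24 dev vxlan100   # example overlay address
```

This gives you segmentation between tenant networks, but it is not a firewall by itself; something (a VM, a host netfilter policy, or an ACL on the switches) still has to enforce rules at the overlay boundary.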
I would really appreciate any advice on both matters.