Ceph with OVS Offload

Hi,

I’m looking for information about running Ceph with OVS offload. I know that running Ceph over OVS, or even a Linux bridge, is not recommended, but that advice applies to the kernel OVS datapath. What if I run Ceph over OVS with hardware offload on ConnectX-4 or ConnectX-5? With OVS, I can get better traffic balancing over VXLAN than with port bonding, which requires MLAG (and MLAG consumes extra switch ports for the IPL). Any performance comparison would be helpful.
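
For context, by "OVS offload" I mean the ASAP2 path where OVS flows are pushed into the NIC's eSwitch via TC. A minimal sketch of what enabling it looks like on a node (the PCI address, interface names, and peer IP below are placeholders from my own notes, not a tested Ceph setup):

# Put the NIC's embedded switch into switchdev mode (placeholder PCI address).
# On ConnectX-4/5 the SR-IOV VFs must already exist before this step.
devlink dev eswitch set pci/0000:03:00.0 mode switchdev

# Enable TC-based hardware offload in OVS and restart it
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch

# Bridge with a VXLAN tunnel port (remote_ip and key are placeholders)
ovs-vsctl add-br br-ovs
ovs-vsctl add-port br-ovs vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.168.100.2 options:key=100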

Best regards,

Any experts on this?

Hi Lazuardi,

I am not sure whether you have found a solution for your deployment yet; however, have you read about the ASAP2 solution?

ASAP2 is GA as part of MLNX_OFED 4.4 and has a separate page with more details:

End-to-End High-Speed Ethernet Connectivity | NVIDIA

Getting started with Mellanox ASAP^2 https://community.mellanox.com/s/article/getting-started-with-mellanox-asap-2
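
At a high level, the getting-started flow in that article looks like the following (the interface name and PCI addresses here are examples only; the article has the authoritative steps):

# Create two SR-IOV VFs on the physical function (example netdev name)
echo 2 > /sys/class/net/enp3s0f0/device/sriov_numvfs

# Unbind the VFs before changing the eSwitch mode (example VF PCI addresses)
echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
echo 0000:03:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind

# Switch the eSwitch to switchdev mode; a representor netdev appears per VF
devlink dev eswitch set pci/0000:03:00.0 mode switchdev

# Rebind the VFs, then add the VF representors to the OVS bridge
echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/bind
echo 0000:03:00.3 > /sys/bus/pci/drivers/mlx5_core/bind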

Sophie.

Hi Sophie,

I have read everything about ASAP2 on the Mellanox website. My question is about the performance of running Ceph with ASAP2 OVS offload and VXLAN offload.
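
To be concrete about what I want to measure: something like a standard Ceph benchmark run while verifying that the datapath flows actually land in hardware, e.g. (the pool name is just an example):

# Confirm which OVS datapath flows were offloaded to the NIC
ovs-appctl dpctl/dump-flows type=offloaded

# Baseline Ceph throughput from a client node
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 rand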

Best regards,

Hi Lazuardi,

Ceph has not been tested against the ASAP2 OVS offload solution.

Sophie.

Hi Sophie,

How can I request that test from Mellanox as a reference? I’m looking for a reference design for Ceph link redundancy without MLAG on the switches, while maximizing the offload features of the ConnectX-5 EN.
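
To make the kind of design I have in mind concrete (all addresses are placeholders; I have not validated this with offload): each node would carry its VXLAN endpoint on a loopback address and reach the fabric over two routed uplinks with ECMP, so no MLAG is needed on the switches:

# VTEP address on loopback, reachable over either uplink
ip addr add 10.255.0.1/32 dev lo

# ECMP route toward the other VTEPs via two independent switches
ip route add 10.255.0.0/24 nexthop via 172.16.1.1 dev enp3s0f0 nexthop via 172.16.2.1 dev enp3s0f1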

Best regards,