Direct-connect NVMe-oF setup

Is it possible/advisable to directly connect a server with a RAID 5 NVMe array and an MCX515A-CCAT installed to four compute servers, each with an MCX511F-ACAT installed, using an MCP7F00-A002R30N splitter cable? We're trying to avoid purchasing an expensive 100GbE switch when we don't really need one.

Are these the instructions I would follow to do so:

https://community.mellanox.com/s/article/howto-configure-nvme-over-fabrics

Or are there newer instructions now that mainline Linux kernels have built-in support?
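
For context, my understanding is that on recent mainline kernels the target side is driven entirely through the nvmet configfs interface and the initiator side through nvme-cli, with no out-of-tree pieces needed. Below is a minimal sketch of the target setup I have in mind, done from Python by writing to configfs; the subsystem NQN, backing device (/dev/md0 for the RAID 5 array), address, and port are placeholders for our environment, not values from any official guide.

```python
#!/usr/bin/env python3
"""Minimal NVMe-oF (RDMA) target setup via the mainline nvmet configfs
interface. Run as root after `modprobe nvmet nvmet-rdma`.

Assumed values (replace with your own):
  NQN    - subsystem name made up for this example
  DEVICE - the RAID 5 md device to export
  TRADDR - IP bound to the ConnectX-5 port facing the compute nodes
"""
from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2019-06.io.example:raid5"  # hypothetical subsystem NQN
DEVICE = "/dev/md0"                    # assumed RAID 5 array device
TRADDR = "192.168.100.1"               # assumed address on the 100G port
TRSVCID = "4420"                       # conventional NVMe-oF port

def write(path: Path, value: str) -> None:
    path.write_text(value + "\n")

# 1. Create the subsystem; allow any host to connect, which is
#    reasonable on a closed point-to-point link (use the
#    allowed_hosts directory for real access control).
subsys = NVMET / "subsystems" / NQN
subsys.mkdir(parents=True)
write(subsys / "attr_allow_any_host", "1")

# 2. Create namespace 1 backed by the RAID array and enable it.
ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True)
write(ns / "device_path", DEVICE)
write(ns / "enable", "1")

# 3. Create an RDMA port and bind the subsystem to it.
port = NVMET / "ports" / "1"
port.mkdir(parents=True)
write(port / "addr_trtype", "rdma")
write(port / "addr_adrfam", "ipv4")
write(port / "addr_traddr", TRADDR)
write(port / "addr_trsvcid", TRSVCID)
(port / "subsystems" / NQN).symlink_to(subsys)

print(f"exported {DEVICE} as {NQN} on {TRADDR}:{TRSVCID}/rdma")
```

Each compute server would then load nvme-rdma and connect with nvme-cli, e.g. `nvme connect -t rdma -n nqn.2019-06.io.example:raid5 -a 192.168.100.1 -s 4420`, after which the namespace appears as a local /dev/nvmeXnY device.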

Technically, a direct connection is possible, but practically I would strongly suggest using a 100G switch between the RAID-subsystem server and the compute servers rather than a direct-connection topology, particularly when it comes to efficient storage initiator-target communication.

There are many reasons to use a switch in between, but I'll focus on three major ones:

  1. If and when you decide to use FC (flow control) or PFC mode for lossless traffic, the switch will provide robust, non-disruptive read/write communication (see the sketch after this list)

  2. Better routing capabilities

  3. Full synchronization and compatibility between the Mellanox server drivers and the Mellanox adapters' firmware, in terms of RDMA storage protocols such as NVMe-oF, iSER, etc.
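
To illustrate the first point: on a direct-attached link you can still run global pause or PFC NIC-to-NIC, but there is no switch in the middle to absorb bursts and manage per-priority buffering. A rough sketch of enabling flow control on the adapters follows, assuming the interface name, the presence of ethtool and of MLNX_OFED's mlnx_qos utility, and the common (but not required) convention of putting storage traffic on priority 3:

```python
#!/usr/bin/env python3
"""Sketch: enable link-level flow control on a ConnectX port.
Assumes the interface name (enp1s0) and that ethtool and, for PFC,
the mlnx_qos utility from MLNX_OFED are installed. Run as root on
every NIC at both ends of the link, since with no switch there is
nothing in between to mediate."""
import subprocess

IFACE = "enp1s0"   # assumed interface name; replace with yours
USE_PFC = False    # global pause and PFC are alternatives; pick one

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if USE_PFC:
    # PFC on priority 3 only, leaving the other priorities lossy.
    run(["mlnx_qos", "-i", IFACE, "--pfc", "0,0,0,1,0,0,0,0"])
else:
    # Global pause: simpler, but pauses all traffic on the link.
    run(["ethtool", "-A", IFACE, "rx", "on", "tx", "on"])
```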

In closing, it is worth investing in a Mellanox 100G switch; in return you get a harmonious, smoothly running system.