Is it possible to use a QSFP port to connect four 10GbE (or perhaps 25GbE) Ethernet cables? I'd need to reconfigure the ConnectX-7 card to expose each transceiver lane as a separate Ethernet adapter. Some of the earlier Mellanox cards were configurable using the mlxconfig tools, but I'm unsure whether the ConnectX-7 cards can do this, particularly the Spark variant. I've read the specs and user manuals, and while the card supports those standards, it's unclear whether it presents 1 adapter or 4 adapters (one per QSFP lane). For me this would save a network switch that currently converts 4x25GbE FPGA adapters into a single 100GbE connection.
This is what devlink port show reports:
elsaco@spark:~$ sudo devlink port show
auxiliary/mlx5_core.eth.0/65535: type eth netdev enp1s0f0np0 flavour physical port 0 splittable false
auxiliary/mlx5_core.eth.1/131071: type eth netdev enp1s0f1np1 flavour physical port 1 splittable false
auxiliary/mlx5_core.eth.2/196607: type eth netdev enP2p1s0f0np0 flavour physical port 0 splittable false
auxiliary/mlx5_core.eth.3/262143: type eth netdev enP2p1s0f1np1 flavour physical port 1 splittable false
splittable false, so port splitting might not be possible. I didn't try it, though!
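For reference, this is roughly what a split attempt would look like via devlink on a port that supports it. This is a hypothetical sketch: the port handle is taken from the output above, but with `splittable false` the kernel is expected to reject the request, and the exact handle devlink wants (auxiliary vs. PCI device) may differ.

```shell
# Hypothetical sketch: splitting the first physical port into 4 lanes.
# With "splittable false" in the output above, this should fail with
# an error rather than create four netdevs.
sudo devlink port split auxiliary/mlx5_core.eth.0/65535 count 4

# Check the result, and undo a successful split with:
sudo devlink port show
sudo devlink port unsplit auxiliary/mlx5_core.eth.0/65535
```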
Hi, this is untested on DGX Spark, and it would require an unvalidated splitter cable that could cause other issues.
If you are familiar with the process, these settings are modified using mlxconfig.
Namely, the related parameters are NUM_OF_PF and MODULE_SPLIT_M<x>.
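As a rough illustration of that mlxconfig workflow on ConnectX cards that support it: the parameter names come from the post above, but the MST device path and the values shown are assumptions and have not been validated on Spark, so treat this only as a sketch.

```shell
# Hypothetical sketch; device path /dev/mst/mt4129_pciconf0 is an
# assumption (start the MST driver first with: sudo mst start).

# Query the current physical-function count:
sudo mlxconfig -d /dev/mst/mt4129_pciconf0 query NUM_OF_PF

# Request 4 PFs and enable splitting on module 1; the new settings
# only take effect after a reboot or power cycle:
sudo mlxconfig -d /dev/mst/mt4129_pciconf0 set NUM_OF_PF=4 MODULE_SPLIT_M1=1
```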
I have never tried this on Spark, partly because we have never validated a split-type cable for it. Each cable has a unique power/thermal profile, and currently only two cables are validated for Spark, both 200G QSFP112 DACs, so it's hard to assess how a split cable would perform; it may cause overheating or other unexpected problems.