The spec sheet says the ConnectX-6 HDR 100 supports both IB and Ethernet. How can I configure this adapter to use Ethernet?
I am currently running a Mellanox MQM8700 switch (firmware version 3.9.3124) - do I need an IB-to-Ethernet gateway, or just a 100GbE switch?
QM8700 is an IB switch.
What is the PSID of the card? If it is a VPI card, it can be configured as ETH and connected to an ETH switch.
If the HCA is a VPI model, you can use mlxconfig to change the firmware port type, e.g.:
mlxconfig -d /dev/mst/mt4119_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
https://docs.nvidia.com/networking/display/MFTv4221LTS/Using+mlxconfig
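To check the PSID and the current port configuration first, a minimal sketch (the device name below is an example; list your actual devices with mst status, and note that a LINK_TYPE change only takes effect after a reboot or mlxfwreset):

flint -d /dev/mst/mt4119_pciconf0 query        # prints PSID and firmware version
mlxconfig -d /dev/mst/mt4119_pciconf0 query    # current settings, incl. LINK_TYPE_P1/P2 (1=IB, 2=ETH)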
Thanks for the response. I am very new to the Mellanox/InfiniBand world.
I should also clarify (respectfully): I know that the QM8700 is a switch. I guess I had two questions:
- Can I configure my MLX adapters (PSID: DEL0000000013) to use Ethernet?
- If so, can I use the switch I am currently running (MQM8700) for Ethernet, or do I need to purchase an Ethernet gateway (or something similar) to work with the MQM8700?
I should also mention that my HPC environment is working using InfiniBand, and I don't NEED to change to Ethernet. However, I suspect that IB is only being used for MPI and not necessarily for file sharing etc., and I think the environment would be much faster if I could get that working properly (hence the questions about Ethernet).
I have 1 head node and 15 compute nodes (Windows Server 2016), each with an Ethernet NIC and a Mellanox adapter, connected to a 10Gb Ethernet switch and the Mellanox MQM8700 switch. The config is basically right out of the box, using port splitting at 100Gb/s (for MPI functions etc.). I have 2 issues:
- We are not getting anywhere near 100Gb/s (closer to 30-40Gb/s), and I don't understand why, or know how to test this (currently relying on MS HPC ping/pong tests).
- As mentioned above, I would like to utilize the InfiniBand adapters for file transfers as well as MPI activity, but I don't know how to do that (SMB Direct using RDMA, I believe?)
The Mellanox ConnectX-6 HDR 100 adapter supports both InfiniBand and Ethernet connectivity. To configure the adapter to use Ethernet, install the Mellanox driver package that matches the adapter and your operating system, then set the port link type to Ethernet (for example with mlxconfig, as shown above).
Once the appropriate drivers and software are installed, configure the usual network settings for the Ethernet interface on the adapter (IP address, subnet mask, gateway, etc.), plus any Ethernet-specific settings you need, such as VLAN tagging or jumbo frames.
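On Windows Server this can be done with the built-in networking cmdlets, for example (a minimal sketch; the interface alias, address, and jumbo-frame value below are placeholders for your environment):

Get-NetAdapter                                   # find the Mellanox Ethernet interface alias
New-NetIPAddress -InterfaceAlias "Ethernet 3" -IPAddress 192.168.100.10 -PrefixLength 24
Set-NetAdapterAdvancedProperty -Name "Ethernet 3" -RegistryKeyword "*JumboPacket" -RegistryValue 9014   # optional jumbo frames; keyword may vary by driver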
Regarding your question about the Mellanox MQM8700 switch: it is an InfiniBand switch and does not support Ethernet connectivity. If you want to use Ethernet with your ConnectX-6 HDR 100 adapter, you will need to connect it to a 100GbE switch that supports the appropriate Ethernet standards (e.g., IEEE 802.3ba). Depending on your requirements, you may also want to consider factors such as port density, throughput, and latency when selecting a switch.
- What tests are you using for performance? Please use the perftest package for RDMA testing (ib_write_bw etc.); see the sketch after this list.
- To get max performance from the setup you might need to perform some tuning on the server itself (BIOS etc.)
- See ESPCommunity
- Storage over IB should be trivial. You can use IPoIB interfaces etc.
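For example, a minimal two-node bandwidth test with perftest (the device name and server address are placeholders; on Windows, the Mellanox WinOF package ships nd_write_bw as the rough equivalent):

ib_write_bw -d mlx5_0 --report_gbits                 # on the server node: wait for a connection
ib_write_bw -d mlx5_0 --report_gbits 192.168.100.10  # on the client node: run the RDMA write test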
Sounds like you are using a Microsoft OS. If that is the case, I think they have some guides on how to configure storage over RDMA, e.g. https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn583823(v=ws.11)
- Your VPI device (DEL0000000013) supports IB/ETH, so you can transition to Ethernet and use the device for RDMA (RoCE) or plain Ethernet. You will need an Ethernet switch for that. But if you already have an IB switch and your concern is enabling storage, the above comments still stand: there should be no issue setting it up (you will need to review the relevant driver user manuals, depending on the OS etc.)
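As a minimal sketch of verifying SMB Direct on Windows Server, drawn from the linked guide (run in an elevated PowerShell; output details vary by driver):

Get-NetAdapterRdma                 # confirm the adapter reports RDMA capability
Get-SmbClientNetworkInterface      # check the SMB client sees an RDMA-capable interface
Get-SmbMultichannelConnection      # after a file copy to a share, confirm RDMA is in use
netstat.exe -xan                   # list active Network Direct (RDMA) listeners/connections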
Thank you (anawilliam850) - that information was very helpful.
Thank you as well (dwaxman) - again, information was very helpful.
I think I understand now, and will take a look at setting up RDMA storage (SMB Direct), either natively over IB or with RoCE if we move to Ethernet.
very good