Hi!
I’m using a bunch of MC2207130-002 FDR14 2m copper cables.
In a Linux environment, such as my storage hosts, these links come up at 56GbE successfully.
But with the ESXi 6.5 Update 1 inbox driver, the 56GbE link stays down.
So for now I’m running them in 56Gb FDR14 IB mode with the latest firmware, 2.42.5000.
BR,
Jae-Hoon Choi
PS.
I’ll rerun the same Ethernet test with your commands and the ESXi 6.5 inbox driver.
I’ll also test your firmware 2.42.5000 and my custom firmware 2.36.5150 with 56GbE enabled, too.
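For reference, the firmware swap between 2.42.5000 and the custom 2.36.5150 image can be done with Mellanox’s MFT tools; the sketch below is hedged, since the MST device path and image file names are placeholders that will differ on each host:

```shell
# Start the Mellanox Software Tools service and list devices to find the MST path
mst start
mst status

# Burn a firmware image; the device path and .bin name below are examples only
flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3-rel-2_42_5000.bin burn

# Confirm the running and pending firmware versions afterwards (reboot to activate)
flint -d /dev/mst/mt4099_pci_cr0 query
```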
Update 01.
Here are the results.
* Switch port set to 56Gb Ethernet mode with MTU 9000
* ESXi 6.5 Update 1 pNIC list on the ESXi console
* ESXi 6.5 Update 1 56Gb Ethernet link-up status with the inbox ConnectX-3 driver
* ESXi 6.5 Update 1 56Gb IPoIB link-up status with the 1.8.2.5 SRP driver
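The link states above can be checked from the ESXi console with something like the following (the vmnic name is just an example; yours will differ):

```shell
# List all physical NICs with driver, link state, and speed
# (a 56GbE port should report 56000 Mbps when the link is up)
esxcli network nic list

# Show detailed info for one port, including driver and firmware version;
# vmnic4 is a placeholder name
esxcli network nic get -n vmnic4
```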
I don’t know why everything works perfectly this time.
I just clean-installed ESXi 6.5 Update 1 again on the Fujitsu RX200 S7 POC server.
Q01.
May I conclude that the latest firmware supports 56Gb Ethernet link-up with Mellanox Ethernet switches and cables?
Q02.
In 56GbE mode, I ran an iPerf test between two physical ESXi hosts and got only about 6.11 Gb/s.
* 56GbE iPerf client - physical ESXi 6.5 Update 1 host 01 with MTU 9000
* 56GbE iPerf server - physical ESXi 6.5 Update 1 host 02 with MTU 9000
All tests on the 10, 40, and 56GbE ports give the same result…
Could you give me another point to check?
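One thing worth ruling out: on 40/56GbE links a single TCP stream is often CPU-bound well below line rate, so a multi-stream run may show whether the link itself is the bottleneck. A minimal sketch, assuming iperf3 is available on both hosts (the server address and stream count are placeholders):

```shell
# Server side (host 02)
iperf3 -s

# Client side (host 01): 8 parallel streams, 30-second run, 1 MB send buffers
# 192.168.56.2 is an example address for the iperf server
iperf3 -c 192.168.56.2 -P 8 -t 30 -l 1M
```

If the aggregate of parallel streams scales well past 6.11 Gb/s, the limit is per-stream TCP/CPU overhead rather than the NIC or switch.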
This question was also raised in this thread:
ESXi 6.5U1 40,56Gb Ethernet Performance problems Infrastructure & Networking - NVIDIA Developer Forums
I’ll follow up on the performance issue in that thread.
BR,
Jae-Hoon Choi