ESXi 6.5 update 1 inbox driver - 56GbE speed support with CX-3 & SX6036G


I tried to bring up a 56GbE link with the ESXi 6.5 Update 1 inbox driver between a ConnectX-3 (not Pro) and an SX6036G Ethernet port.

But the ESXi host shows the link as down; it only links up when the SX6036G port is in 40GbE mode.

Here is a link about it.

HowTo Setup 56GbE Back-to-Back on two servers

I can download the latest CX-3 firmware in .bin format, but not in .mlx format.

How can I resolve this issue?


Jae-Hoon Choi

Hi Jaehoon,

Can you provide us the part number of the Mellanox cable in this setup?

Thanks and regards,

~Mellanox Technical Support

I tested the -P 8 option, but it shows me only a combined throughput of slightly under 10Gb/s.

I'll test with the old vmklinux-based Ethernet driver (from Dell).

I think there are optimization problems in the inbox native driver.
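For reference, a quick way to check which driver an ESXi pNIC is bound to (a hedged sketch of environment-dependent CLI; vmnic2 is an assumed adapter name):

```shell
# List all physical NICs with driver name, link state, and speed
esxcli network nic list

# Show details for one adapter, including the loaded driver
# (nmlx4_en = native inbox driver, mlx4_en = old vmklinux driver)
esxcli network nic get -n vmnic2
```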


Jae-Hoon Choi


I'll post another thread, because the 56Gb Ethernet link-up problem is gone…

Mellanox released firmware with 56GbE disabled.

When firmware 2.36.5150 was released, Mellanox also released the raw file format (*.mlx).

Originally 56GbE needed a separate license, but Mellanox announced that it's free.

You can find what is different in the configuration in this thread.

If 40/56GbE shows reasonable performance, I'll stop using RDMA ULPs in vSphere.


Jae-Hoon Choi

You can find the firmware for ConnectX-3 below.

And my SX6036G's current firmware needs CX-3 firmware 2.40.7000 or above.


Jae-Hoon Choi

Why don't you just flash the older firmware, e.g. 2.40.7000, using mlxfwmanager?


I'm using a bunch of MC2207130-002 FDR14 2m copper cables.

When used in a Linux environment, like my storage, the 56GbE link comes up successfully.

But with the ESXi 6.5 Update 1 inbox driver, the 56GbE link stays down.

Therefore I'm just using 56Gb FDR14 IB mode with the latest firmware, 2.42.5000.


Jae-Hoon Choi


I'll rerun the same Ethernet test with your commands and the ESXi 6.5 inbox driver.

I'll also test your firmware 2.42.5000 and my custom firmware 2.36.5150 with 56GbE enabled.

Update 01.

Here are the results.

* Switch port set to 56Gb Ethernet mode with MTU 9000

* ESXi 6.5 update 1 pNIC list on the ESXi console

* ESXi 6.5 update 1 56Gb Ethernet link-up status with the inbox ConnectX-3 driver

* ESXi 6.5 update 1 56Gb IPoIB link-up status with the SRP driver
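For reference, the setup and check above might look roughly like this (a sketch of environment-dependent CLI; MLNX-OS is assumed on the switch, and port 1/1 is an assumed port number):

```shell
# On the SX6036G (MLNX-OS CLI): force 56GbE and jumbo MTU on the port
enable
configure terminal
interface ethernet 1/1 speed 56000
interface ethernet 1/1 mtu 9000 force

# On the ESXi host: the pNIC should now report a 56000 Mbps link
esxcli network nic list
```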

I don't know why, but this time everything works perfectly.

I just clean-installed ESXi 6.5 update 1 again on a Fujitsu RX200 S7 POC server.


May I conclude that the latest firmware supports 56Gb Ethernet link-up with Mellanox Ethernet switches and cables?


In 56GbE mode, I ran an iPerf test between physical ESXi hosts, but I got only about 6.11Gb/s.

* 56GbE iPerf client - physical ESXi 6.5 update 1 host 01 with MTU 9000

* 56GbE iPerf server - physical ESXi 6.5 update 1 host 02 with MTU 9000
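The test above corresponds roughly to the following iperf invocation (a sketch; the server address is an assumption, and iperf must be available on both hosts):

```shell
# Host 02 (server side): listen for incoming streams
iperf -s

# Host 01 (client side): 8 parallel streams for 30 seconds
# (192.168.1.102 stands in for host 02's address)
iperf -c 192.168.1.102 -P 8 -t 30
```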

All tests on the 10, 40, and 56GbE ports show the same result…

Could you give me another checkpoint?

This question was also asked in this thread:

ESXi 6.5U1 40,56Gb Ethernet Performance problems

I'll follow up on the performance issue in that thread.


Jae-Hoon Choi

Hi Jae-Hoon,

Thank you for posting your question on the Mellanox Community.

For reaching 56GbE speed, you also need to make sure that the cable you are using is capable of 56GbE. For example, use MC2207130-001.

This cable is a VPI cable which supports Ethernet and InfiniBand connections.

On the SX6036, 56GbE is enabled by default in MLNX_OS 3.4.3002 or above.

When the correct cable is used, the switch will auto-negotiate 56GbE to the NIC when the port is set to 56GbE.

Thanks and regards,

~Mellanox Technical Support

Yes! I'm using the correct Mellanox cables, HCAs, and switches.

But your latest CX-3 firmware, 2.42.5000, has that 56GbE capability disabled.

The ESXi 6.5 update 1 inbox driver supports 40GbE, but when I change the port mode to 56GbE, the port links down.

With your firmware 2.36.5150, which has 56GbE enabled, the ESXi 6.5 update 1 host can link up the 56GbE port.

When that firmware was released, you also released the raw image file, and I could achieve 56GbE by modifying configuration.ini.

With your latest firmware 2.42.5000, the configuration doesn't support 56GbE properly.

When I extracted the configuration file via flint.exe with the dc option, I got an unbelievable result, shown below.

The portx_802_3ap_56kr4_enable = true option is gone…

Without this option, the ESXi 6.5 update 1 inbox driver can't link up in 56GbE port mode.
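For reference, the extraction step described above can be sketched as follows (Linux MFT syntax shown; the MST device path and file names are assumptions, and editing and reburning firmware is at your own risk):

```shell
# Dump the firmware configuration from the adapter to an ini file
flint -d /dev/mst/mt4099_pci_cr0 dc > fw_config.ini

# Look for the per-port 56GbE (802.3ap 56KR4) enable flags
grep 56kr4 fw_config.ini

# After restoring portx_802_3ap_56kr4_enable = true in the ini,
# a raw .mlx image can be reburned with mlxburn (requires the .mlx release)
mlxburn -dev /dev/mst/mt4099_pci_cr0 -fw fw-ConnectX3.mlx -conf fw_config.ini
```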

Did you intentionally kill the 56Gb Ethernet capability on ConnectX-3 in favor of the new-generation ConnectX-4 and above?

Where can I find the raw image file for version 2.42.5000?


Jae-Hoon Choi

I always use mlxfwmanager on Linux to update my CX-3 cards' firmware. The latest firmware it shows for me is version 2.40.7000. You can always flash the latest image by running:

mlxfwmanager --online -u -d &lt;device address&gt;

where the address can be found by running:

mst start

mst id add

mst status

You can flash any card from your Linux box this way.

Now, with my firmware, I just switched one of the ports to EN mode and did the same on the switch. Indeed, by default it connected at 40Gbps. However, I was able to change the speed on the switch to 56Gbps and it worked. Maybe you should try the same…
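The port-mode step can be sketched with mlxconfig (a hedged sketch; the MST device path is an assumption):

```shell
# Query current settings, including the port protocol
mlxconfig -d /dev/mst/mt4099_pci_cr0 query

# Set port 1 to Ethernet mode (LINK_TYPE values: 1 = IB, 2 = ETH, 3 = VPI/auto-sense)
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=2
```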

You are absolutely correct.

But I'm talking about the configuration in the firmware that disabled 56Gb Ethernet, not the firmware update method.

Do you know mlxburn, which can burn custom firmware?


Jae-Hoon Choi


The latest CX-3 firmware version is 2.42.5000…

There are some fixes in the latest firmware that I need… :)

What kind of fixes are so necessary? Besides, I can’t see version 2.42.5000 anywhere available for download. How did you get it installed in the first place?

Got it. So you can go back to 2.40.7000, which should be working fine… Why don't you give it a try?

Hi Jaehoon Choi,

I did a quick test in our lab.

Even with the latest firmware I am getting 56GbE link-up. The switch used in our lab is also an SX6036. See the screen dump below:

Please share the part number of the Mellanox cable used. Or even better, please take a screenshot of the transceiver information from the switch port to which the node is connected. See our switch screenshot below.

Thanks and regards,

~Mellanox Technical Support

Hi Jaehoon,

Yes, the latest firmware supports 56GbE as long as the setup is end-to-end Mellanox (NIC, cable, switch).

When using iperf, try the -P option to run more parallel streams.

Thanks and regards,

~Mellanox Technical Support