Link-up delay on QDR and faster switches

My first experience was with a DDR IB switch. My second switch is a Voltaire 4036 QDR switch.

But there is a link-up delay of almost 20 seconds in vSphere 5.x environments.

I want to start some VMs with the auto-start feature.

But the 4036’s link-up delay causes the SRP target storage to come online late, so VMware auto start fails every time.


If I switch to an IS503x or newer SwitchX-based IB fabric switch, can I use the VMware auto-start function?

Or is there a link-up delay on every IB switch?


I also tried another IB solution for auto start:

an IPoIB iSCSI target!

(I also thought that maybe there is a link-up delay on every switch and it’s normal behaviour…)

That configuration always boots the VMs successfully with auto start,

but it also has performance problems.

vSphere 5.5 was announced at the end of August.

If vSphere 5.5 launches this month (September), can you release a new driver with iSER and IPoIB uplink support immediately?

Hi! I found the root of the problem.

When the ESXi host reboots, vSphere OFED doesn’t log out from the SRP target.

If I stop the ibd daemon on the ESXi host with the “/opt/mellanox/bin/ stop” command, then the ZFS COMSTAR SRP target acknowledges the vSphere SRP initiator’s logout, and ESXi VM auto start works… :)

But if you have a plan to release a new vSphere OFED, this problem should be resolved…!!

I also tried adjusting the sweep time in my SM configuration,

but the link-up delay was still there.

There is no link-up delay when I change the switch to a DDR switch with an SM.

(I have some hybrid cables and DDR switches, too.)

In the DDR switch environment, the vSphere driver loads and the logical link comes up immediately.

Are there any users or admins running vSphere environments with QDR or faster switches?


(By link-up, I mean the logical link with the ZFS SRPT.)


What do you mean by link-up delay? The time it takes for the physical link to come up?

Hi yairi! :)

The SM is embedded in the Voltaire 4036 QDR switch.

The major problem is your vSphere SRP initiator.

This driver doesn’t disconnect (log out) from the ZFS SRPT when the ESXi host reboots.

Frankly speaking, the logical link-up itself is not the problem.

The major problem is the behavior of the vSphere OFED 1.8.2 SRP initiator when it reconnects to the ZFS SRPT after a host reboot.

Sometimes the login is rejected repeatedly before it finally connects to the ZFS SRPT,

but by then it is too late for the ESXi host’s VM auto-start function.

Because of this problem, I always use a dummy IPoIB iSCSI target on the ZFS storage

and configure an IPoIB iSCSI initiator on every ESXi host to work around the late ZFS SRPT connection.
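For anyone wanting to try the same workaround, the ESXi side amounts to roughly the following CLI fragment; the adapter name (vmhba37) and target address (192.168.10.10) are placeholders, not values from this thread:

```shell
# Hedged sketch of the dummy IPoIB iSCSI workaround on an ESXi 5.x host.
# Adapter name and target address are example values only.

# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Point it at the dummy iSCSI target exported over IPoIB on the ZFS box
esxcli iscsi adapter discovery sendtarget add \
    --adapter=vmhba37 --address=192.168.10.10:3260

# Rescan so the datastore is visible before auto start kicks in
esxcli storage core adapter rescan --adapter=vmhba37
```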

Do you have any solution?

By “logical”… does that mean:

“no issues with the physical link, it comes up as expected, but the ZFS SRPT connection takes time”, right?

If yes, at the very basic level: how long does it take until the server gets a LID from the SM? And how long from that event until the ZFS connection is established?
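One hedged way to measure the first half of that (driver load until LID assignment) is to poll `ibstat` from the host until a non-zero base LID appears. This is only a sketch, assuming the usual infiniband-diags `ibstat` output format:

```shell
#!/bin/sh
# Sketch: time from now until the SM assigns a LID to the local HCA port.
# Assumes the standard infiniband-diags `ibstat` tool is installed.

# Pull the "Base lid:" value out of ibstat-style output on stdin
parse_lid() {
    awk -F': *' '/Base lid/ {print $2; exit}'
}

wait_for_lid() {
    start=$(date +%s)
    while :; do
        lid=$(ibstat 2>/dev/null | parse_lid)
        if [ -n "$lid" ] && [ "$lid" != "0" ]; then
            echo "LID $lid assigned after $(( $(date +%s) - start ))s"
            return 0
        fi
        sleep 1
    done
}
```

Running `wait_for_lid` right after driver load would give the first number; comparing against the storage login timestamp in the vmkernel log would give the second.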

What type of SM are you using?

I have seen delays getting the links up on our systems. I have not investigated it since it is not an issue.

Have you tried adjusting any of the subnet manager configurations? I assume that you have the SM running on the switch? Perhaps there are sweep timings or caching that can be adjusted to improve the startup time.
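For reference, if a host-based opensm were in play (rather than the 4036’s embedded SM, which has its own management interface), the sweep interval is one such knob; a sketch, assuming stock opensm:

```shell
# Host-based opensm only; the 4036's embedded SM is configured separately.
# Start opensm with a 5-second light-sweep interval:
opensm --sweep 5

# Or set the same knob persistently in opensm.conf:
#   sweep_interval 5
```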