Anyone using ESXi 6.5?

I am currently running ConnectX-3 cards with the 1.8.2.4 drivers back to an SRP target on ESXi 6.0. I know Mellanox has all but given up on SRP and VMware. I also have ConnectX-4 cards available.

Currently my ConnectX-3 cards provide connectivity to my datastore via an SCST/SRP target. What is the fastest option available for ConnectX-4 cards? What does the in-box 6.5 driver support? At this point it looks like the answer is iSCSI, but I'm curious what others have tried.
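
For anyone checking, something like the PowerCLI sketch below should show what a given host's in-box driver exposes (the host name is a placeholder and it assumes the VMware PowerCLI module is installed; exact property names can vary by PowerCLI version):

# PowerCLI sketch -- host name and credentials are placeholders
Connect-VIServer -Server esxi65-host.example.local

$vmhost = Get-VMHost -Name esxi65-host.example.local
$esxcli = Get-EsxCli -VMHost $vmhost -V2

# Which driver claims each NIC (in-box nmlx native driver vs. Mellanox OFED)
$esxcli.network.nic.list.Invoke() | Select-Object Name, Driver, Description

# RDMA devices registered by the native drivers (this esxcli namespace is new in 6.5)
$esxcli.rdma.device.list.Invoke()

# Software iSCSI adapters, in case iSCSI/iSER ends up being the fallback
Get-VMHostHba -VMHost $vmhost -Type iScsi | Select-Object Device, Model, Status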

Hi!

The Mellanox vSphere OFED 2.4.0 driver has a namespace conflict with the vSphere 6.5 native system drivers (nrdma, vrdma) for Ethernet RoCE.

Mellanox can’t support vSphere 6.5 now…:(

I’ll switch to the vSphere 6.5 in-box driver with FreeNAS; that should work properly.
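
For anyone doing the same switch, something like this PowerCLI sketch shows whether any OFED packages are still installed before relying on the in-box driver (host name is a placeholder; the actual vib names on your host will differ):

$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name esxi65-host.example.local) -V2

# List any Mellanox OFED / mlx packages still installed
$esxcli.software.vib.list.Invoke() |
    Where-Object { $_.Name -match 'mlx|mlnx' } |
    Select-Object Name, Version, Vendor

# A conflicting OFED vib can then be removed by name (maintenance mode and a
# reboot are needed afterwards), e.g.:
# $esxcli.software.vib.remove.Invoke(@{vibname = '<ofed-vib-name>'})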

Yes! You are correct!

If the host OS is Windows, you don't need block storage like a SAN anymore to build a cluster.

But my environment isn't a Windows platform.

That is why we don’t use SMB Direct.

Windows SMB Direct with RDMA delivers powerful performance.

But SMB Direct is not a block storage service, and it is Windows-only between host and storage.
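
For anyone who is running Windows end to end, a quick PowerShell check like the sketch below confirms whether SMB Direct is actually using RDMA (adapter and server details will differ; this is just illustrative):

# Run on the SMB client
# NICs that advertise RDMA capability and have it enabled
Get-NetAdapterRdma | Where-Object Enabled

# Interfaces the SMB client will consider for SMB Direct
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# After generating some traffic to the share, confirm the connections are RDMA-capable
Get-SmbMultichannelConnection | Select-Object ServerName, ClientIpAddress, ClientRdmaCapable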

I think System Center and Hyper-V are also a good solution, but not for an SMB environment.

Have you tried SMB/RDMA on Windows Server 2016 using CX-3 with the newest firmware? We can get 4500Mb/s on SMB for Windows Hyper-V, and with SMB Multichannel we see bandwidth peaking at 6.5GB/s with very low latency. (The test backend is dual Samsung 960 Pro NVMe cards x 2 (dual NVMe) on PCIe-to-NVMe adapters in RAID 0, at 7GB/s read and 4.2GB/s write.)
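
For reference, a DiskSpd run along these lines is one way to reproduce that kind of sequential throughput test against an SMB share (the share path, file size and queue depths below are illustrative, not the exact test above):

# Sequential 512K reads, 60s, 8 threads x 8 outstanding I/Os, caching disabled.
# \\smb-target\bench is a placeholder share backed by the NVMe array.
.\diskspd.exe -c20G -b512K -d60 -t8 -o8 -w0 -Sh -L \\smb-target\bench\testfile.dat

# Switch -w0 to -w100 (or e.g. -w30) for write / mixed workloads,
# and use -b4K -r for a random small-block IOPS test instead of bandwidth.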

We're now running 40Gb Ethernet over an Arista network to HP C7000 chassis, and will upgrade the Flex10s to Flex20s when the price drops below $4K. With vSAN we're getting 35-40Mb/s on that as well. But clients are opting for the Windows cloud instead… vCloud kinda sucks.

With Windows Storage Spaces plus tiered SSD/SAS drives we can build a clustered storage array better than vSAN for next to nothing: no extra VMware vSAN licenses, and no custom SRP mucking about with Ceph or Lustre to make a storage cluster… SMB+WSS+RDMA is better in my opinion. SRP is dead… we're giving up on it. When you install Windows Server 2016, it has drivers for CX-2, CX-3 and CX-4, they all work together with no problems, and you can see RDMA installed and SMB working out of the box… no more screwing around with VMware to install drivers. We're getting ready to dump VMware as well; no upgrade to 6.5, because frankly there is little new in it that is worth the price. VMware have lost the plot.
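
To give an idea of how little work the tiered part is, a basic tiered space is roughly this much PowerShell (pool, tier and volume names and sizes are examples; a clustered Scale-Out File Server setup needs additional steps):

# Pool all disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Define an SSD tier and an HDD (SAS) tier
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Carve a tiered, mirrored virtual disk out of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VMStore" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 10GB

# Initialize, partition and format it
Get-VirtualDisk -FriendlyName "VMStore" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "VMStore"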

No, it's not block I/O, that's the POINT! You don't need block I/O anymore to run Hyper-V workloads. No more $150,000 SANs.

No more manual mucking about with nasty VMware incompatibilities. vSAN is not supported on InfiniBand networks, so it's not a production solution for us anyway, and vSAN on 10Gb/s isn't much better than iSCSI… Now I see why Mellanox are going all-out on Ethernet. Microsoft is a much bigger market than VMware, and providing a single pane of glass for Hyper-V and apps is a more logical and cost-effective solution.

http://www.mellanox.com/related-docs/applications/Achieving-1.1M-IOPS-over-SMB-Direct.pdf

Workloads running on Hyper-V VMs can achieve over 1 million IOPS and 100Gb/s aggregate bandwidth. With Windows Storage Spaces and clustered SMB shares over RDMA, you can run Hyper-V clusters directly on WSS.
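
Once the clustered share exists, pointing Hyper-V at it is just a UNC path, roughly like this (server, domain and path names are examples; in a real cluster the share lives on the Scale-Out File Server role, with permissions for every Hyper-V host computer account):

# On the file server: share the volume for Hyper-V, continuously available
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\Domain Admins" `
    -ContinuouslyAvailable $true

# On a Hyper-V host: create a VM whose config and VHDX live on the SMB share
New-VM -Name "TestVM" -MemoryStartupBytes 4GB -Generation 2 `
    -Path "\\SOFS\VMs" `
    -NewVHDPath "\\SOFS\VMs\TestVM\disk0.vhdx" -NewVHDSizeBytes 60GB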

Who needs to waste time building SRP/SCST clusters with Ceph (RBD), whose performance sucks,

or DRBD, which also sucks and is tediously complex, or some other tedious Lustre variant where you then have to implement a cluster FS and port it over SCST as block I/O?

You can kick out iSER, iSCSI and SRP all at once, build a storage and Hyper-V cluster in 1-2 days, and be in production in no time.
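
The cluster build itself is only a handful of commands once the hardware and networking are sorted; a skeleton with placeholder node and cluster names:

# Validate and create the failover cluster (run from any prospective node)
Test-Cluster -Node "NODE1", "NODE2"
New-Cluster -Name "STORCL" -Node "NODE1", "NODE2" -StaticAddress 10.0.0.50

# (On 2016 with local disks you would also run Enable-ClusterStorageSpacesDirect here)

# Present the clustered storage as a Scale-Out File Server for Hyper-V
Add-ClusterScaleOutFileServerRole -Name "SOFS" -Cluster "STORCL"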

Whereas building a CX-3 SRP storage solution for ESXi 6 is a pain. And what drivers are ready for ESXi 6.5 for SRP or iSER? They don't exist, and probably never will. DOA.