I have seen other reports of no targets being detected. I have an ESXi 6.5 host running the new iSER driver, and the adapter firmware is up to date (dual-port ConnectX-4 VPI card in Ethernet mode).
I can ping between the Mellanox card and the ESXi host, but when I configure a target, no devices are detected.
When I run a packet capture and rescan the iSER adapter, I see no traffic generated and sent to the Linux target server. I logged just two CDP packets unrelated to the adapter rescan.
The ESXi server is directly connected to the Linux server, so there is no switch currently and thus not a lot of other traffic on this interface, but surely I should see the ESXi server attempt to discover the targets?
The only thing I noticed was that the path is listed as "not used", but I believe this is because no target is detected:
How did you bind the iSCSI adapter for this to work? Is it associated with the same vmnic as the iSER HBA? Did you need to associate a vmkernel adapter with the iSCSI initiator at all? If so, did you assign an IP address to the iSCSI vmkernel? I just don't want to give it a true iSCSI path to the target if I don't have to.
I do not have a switch; they are working back to back, though. There are also a few examples on Mellanox's site of directly connected servers as part of their demos. You can configure Global Pause and PFC in firmware, outside the OS. Not sure why this would not work back to back if both adapters are sending PFC/Global Pause information?
Once I enabled the software iSCSI adapter, I was able to connect to my SCST target. It is working now, but not really performing any better. I was able to get better performance in iSCSI by configuring four vmkernel adapters on my one vmnic and then setting the round-robin policy to 1 IOPS. It was the same adapter, but it seemed to trick ESXi into dedicating more hardware resources/scheduler time to it. At the moment the iSER connection is on par with iSCSI before I applied this round-robin policy. It doesn't look like I can configure a round-robin policy for iSER adapters, but I am still looking into that.
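For anyone wanting to reproduce the round-robin 1-IOPS tweak described above, it can be done per device with esxcli. This is a sketch: the `naa.` device identifier below is a placeholder you must replace with your own target's ID.

```shell
# List devices and find the target's NAA identifier (placeholder used below)
esxcli storage nmp device list

# Set the path selection policy to Round Robin for that device
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR

# Switch paths after every single I/O instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set \
    -d naa.xxxxxxxxxxxxxxxx -t iops -I 1
```

Note the setting is per device, so it must be repeated (or scripted) for each LUN.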
The vmknic must be bound to the iSER initiator, not to the VMware iSCSI initiator. An IP address must also be set on the vmknic.
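The binding above can be done with esxcli. This is a sketch only: the adapter name (`vmhba65`), vmknic (`vmk1`), and target IP are assumptions, so substitute your own values.

```shell
# Find the iSER adapter name (e.g. vmhba64/vmhba65; varies per host)
esxcli iscsi adapter list

# Bind the vmkernel NIC (with its IP already assigned) to the iSER HBA,
# not to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba65 -n vmk1

# Add the send target and rescan the adapter
esxcli iscsi adapter discovery sendtarget add -A vmhba65 -a 192.168.1.10:3260
esxcli storage core adapter rescan -A vmhba65
```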
This test was completed with a StarWind vSAN iSER target and RoCE v1 with Global Pause (access port on an Arista switch).
iSER driver 1.0.0.1 is for Global Pause (RoCE v1).
Mellanox documented this for ESXi 6.5, but the quick-start guide shows an old ESXi 6.0 C# client screenshot, and every Ethernet switch port must be set to Global Pause mode.
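On the ESXi side, Global Pause (802.3x flow control) can be checked and set per uplink with esxcli. A sketch, assuming the Mellanox uplink is `vmnic4` on your host:

```shell
# Show current flow-control settings on all uplinks
esxcli network nic pauseParams list

# Enable global pause (802.3x) RX/TX on the Mellanox uplink (name assumed)
esxcli network nic pauseParams set -n vmnic4 --rx-pause=true --tx-pause=true
```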
This manual is pretty useless… :(
PFC on ESXi needs some configuration via the pfcrx and pfctx module parameters. But if you set the default priority 3 (pfcrx and pfctx to 0x08), you will hit a Host Profile creation error ("general system error"). Mellanox has never provided a fix since driver 1.8.3.0.
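The pfcrx/pfctx settings mentioned above are driver module parameters. A sketch, assuming the ConnectX-4 driver module is `nmlx5_core` (a reboot is required for the change to take effect):

```shell
# Set PFC on priority 3 (bitmask 0x08) for both RX and TX
esxcli system module parameters set -m nmlx5_core -p "pfctx=0x08 pfcrx=0x08"

# Verify the parameters took (effective after reboot)
esxcli system module parameters list -m nmlx5_core | grep pfc
```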
Here is a result from a VM with 2 vCPUs and 8 GB of RAM, running Windows Server 2016. It has a VMXNET3 adapter and a VMware Paravirtual SCSI hard drive.
These results are slightly better than what I was able to achieve with iSCSI, so there is some improvement. I understand a switch is recommended, but it clearly does work without one. I am monitoring for any packet loss/data issues, but as explained before, it is point to point. The target is three NVMe PCIe SSDs in what amounts to a RAID 0 in a ZFS pool/zvol, by the way.
Understood, but I am testing the interconnect; where the data is coming from is a bit irrelevant. Just saying that iSER is performing somewhat better than iSCSI, so it is worth going through this hassle for those sitting on the sidelines wondering =). I was never able to get above 8800 or so on iSCSI. The non-sequential results are very close, not a huge difference.