Incorrect iostat reporting when using NVMe over Fabrics

We’ve been running a vdbench-based benchmark over NVMe over Fabrics and observed the following behavior: dstat on both the client and target machines reports bandwidth consistent with the vdbench report, as do the NVMe SMART log counters. However, the numbers reported by iostat on both the client and target machines are completely wrong. Does this ring any bells?

Test scenario: a vdbench-based benchmark issuing randomly placed writes of varying sizes, using ConnectX-4 adapters with RoCE.

Result: iostat reports bandwidth two orders of magnitude lower than what dstat, vdbench, and the device counters show (e.g. 100KB/s instead of 50MB/s).
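As a sketch (not from this thread), one way to narrow down where the discrepancy comes from is to bypass iostat and sample the kernel's own per-device counters in /proc/diskstats, from which iostat derives its figures. Note that /proc/diskstats always counts I/O volume in 512-byte sectors, independent of the device's logical block size. The device name and the two sample lines below are hypothetical, chosen so the arithmetic lands near the 50MB/s figure mentioned above:

```python
SECTOR_BYTES = 512  # /proc/diskstats counts in 512-byte units, always

def sectors_written(diskstats_line: str) -> int:
    """Extract field 10 (sectors written) from a /proc/diskstats line.

    Field layout: major minor name, then reads-completed, reads-merged,
    sectors-read, ms-reading, writes-completed, writes-merged,
    sectors-written, ms-writing, ...
    """
    return int(diskstats_line.split()[9])

def write_mb_per_s(before: str, after: str, interval_s: float) -> float:
    """Write bandwidth in MB/s between two /proc/diskstats samples."""
    delta_sectors = sectors_written(after) - sectors_written(before)
    return delta_sectors * SECTOR_BYTES / interval_s / 1e6

# Hypothetical samples for an NVMe-oF namespace, taken 10 seconds apart:
before = "259 0 nvme0n1 100 0 800 50 2000 0 1000000 900 0 40 950"
after  = "259 0 nvme0n1 100 0 800 50 4000 0 1976562 1800 0 80 1900"
print(f"{write_mb_per_s(before, after, 10.0):.1f} MB/s")  # -> 50.0 MB/s
```

If the rate computed this way matches vdbench while iostat's output does not, the raw kernel accounting is fine and the problem sits in how iostat is invoked or interpreting the counters; if it matches iostat instead, the block-layer statistics themselves are not being updated correctly for the fabrics path.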

It is not clear what your test scenario is and by how much the results differ.

If your vdbench-based benchmark is running over Mellanox adapter(s), I suggest describing the test setup in more detail so that we can assist with proper tuning and the expected performance figures.