Incorrect iostat reporting when using NVMe over Fabrics

Hi,

We’ve been running a vdbench-based benchmark over NVMe over Fabrics and observed the following behavior: dstat on both the client and target machines reports the correct bandwidth (aligned with the vdbench report), and the same is true for the NVMe SMART log counters. However, the numbers reported by iostat on both machines are completely wrong. Does this ring any bells?
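
To make the comparison concrete, here is a minimal sketch (the device name nvme0n1 is a placeholder) that samples the kernel's per-device counters in /sys/block/<dev>/stat — the same data iostat ultimately derives its numbers from via /proc/diskstats — and prints write bandwidth from the sectors-written delta:

```python
#!/usr/bin/env python3
"""Sample /sys/block/<dev>/stat twice and print write bandwidth."""
import sys
import time

DEV = sys.argv[1] if len(sys.argv) > 1 else "nvme0n1"  # placeholder device name
INTERVAL = 5  # seconds between samples

def sectors_written(dev):
    # Field 7 (1-based) of /sys/block/<dev>/stat is "sectors written";
    # the kernel always counts these in 512-byte units, independent of
    # the device's logical block size.
    with open(f"/sys/block/{dev}/stat") as f:
        return int(f.read().split()[6])

before = sectors_written(DEV)
time.sleep(INTERVAL)
after = sectors_written(DEV)

print(f"{DEV}: {(after - before) * 512 / INTERVAL / 1e6:.2f} MB/s written")
```

If this figure matches dstat but disagrees with iostat, the problem is in how iostat interprets the counters rather than in the kernel accounting itself.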

Thanks

Test scenario: a vdbench-based benchmark issuing randomly placed writes of varying sizes, running over ConnectX-4 with RoCE.

Result: iostat reports throughput two to three orders of magnitude lower than what dstat, vdbench, and the device counters show (e.g. ~100 KB/s instead of ~50 MB/s).
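
For a figure that bypasses the kernel block layer entirely, the controller's own SMART counters can be sampled; a rough sketch, assuming nvme-cli is installed and that /dev/nvme0 is the controller (both the device path and the JSON field name are assumptions):

```python
#!/usr/bin/env python3
"""Estimate write bandwidth from two NVMe SMART log readings."""
import json
import subprocess
import time

CTRL = "/dev/nvme0"  # assumed controller path
INTERVAL = 10        # seconds between samples

def data_units_written(ctrl):
    # "Data Units Written" is reported in units of 1000 * 512 bytes
    # per the NVMe spec; the field name assumes nvme-cli's JSON output.
    out = subprocess.run(["nvme", "smart-log", ctrl, "-o", "json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)["data_units_written"]

before = data_units_written(CTRL)
time.sleep(INTERVAL)
after = data_units_written(CTRL)

print(f"{CTRL}: {(after - before) * 1000 * 512 / INTERVAL / 1e6:.2f} MB/s written")
```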

It is not clear what your test scenario is or how far the results differ.

If your vdbench-based benchmark is running over Mellanox adapters, I suggest contacting support@mellanox.com and presenting the test in more detail to get assistance on the optimal and expected figures.