ib_write_bw and the report_gbits calculation?

Hi all,

The default unit for ib_write_bw is megabytes/sec. With the --report_gbits flag, it reports in gigabits/sec. I ran ib_write_bw many times and got averages of about ~3323.50 MB/sec, then ran it again with the flag and got ~27.88 Gbits/sec.

The math to convert between the two seems odd to me. I had expected:

3323.50 MB/sec * (8 bits / byte) * (1 Gb / 1024 Mb) = 25.96 Gb/sec

Looking at the source code (perftest_parameters.c), I see this line:

format_factor = (user_param->report_fmt == MBS) ? 0x100000 : 125000000;

It looks like if I want to match how ib_write_bw reports the throughput, I would have to calculate Gb/sec more like this:

3323.50 MB/sec * (1,048,576 bytes / MB) * (8 bits / byte) * (1 Gb / 1,000,000,000 bits) = 27.88 Gb/sec

It seems that a megabyte is treated as 1,048,576 bytes (base 2), but a gigabyte as 1,000,000,000 bytes (base 10)? This changes some throughput comparisons I have made recently, where other tests we have run reported MB/sec and I converted to Gb/sec.
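
To sanity-check this, here is a small standalone C program (my own sketch, not the actual perftest code; the names are mine) that applies both factors from that source line to the same underlying byte rate:

#include <stdio.h>

/* The two factors from perftest_parameters.c: MB/sec divides by the
 * binary megabyte (0x100000 = 1,048,576 bytes), while Gb/sec divides
 * by the decimal gigabit (125,000,000 bytes = 10^9 bits). */
#define MBS_FACTOR   1048576.0
#define GBITS_FACTOR 125000000.0

int main(void)
{
    /* A hypothetical raw throughput, chosen to reproduce my runs above. */
    double bytes_per_sec = 3323.50 * MBS_FACTOR;

    printf("MB/sec: %.2f\n", bytes_per_sec / MBS_FACTOR);   /* 3323.50 */
    printf("Gb/sec: %.2f\n", bytes_per_sec / GBITS_FACTOR); /* 27.88   */
    return 0;
}

The gap between my expected 25.96 and the reported 27.88 is exactly 2^30 / 10^9 ≈ 1.0737, the ratio of a binary gigabyte to a decimal one.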

When we write our own tests that report numbers in MB/sec or Mb/sec, is the correct conversion to Gb/sec the one that ib_write_bw performs?

Thanks.

That’s interesting. I suspect it’s a problem of conflicting conventions and standards.

By definition, ‘K’ means 1000 (a power of 10) and ‘Ki’ means 1024 (a power of 2).

However, the Wikipedia article Data-rate units records that, historically, the letter K was often used as a non-standard abbreviation for 1,024, especially in “KB” to mean KiB.

That article also lists device bit rates, where networking units are usually given in bit/s while computer data interface units are given in B/s.

So when you ask for bandwidth in ‘MB’, the tool may intend to compute the result as a computer data interface unit.

And for ‘Gb’, it follows the networking convention.

However, it’s just my guess.

Sorry for misunderstanding your question.

I rechecked the source code and have some further thoughts.

Computer Networks explains that the unit of storage is different from the unit of transmission; they are defined by different standards.

For storage, the ratio is 1024, so 1 MB = 1024 KB = 1024 * 1024 B = 1024 * 1024 * 8 b.

For transmission, the ratio is 1000, so 1 Gbps = 1000 Mbps = 1000 * 1000 Kbps = 1000 * 1000 * 1000 bps = 125,000,000 B/s.
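
Written as a tiny C check (just a sketch; the variable names are mine), the two conventions and the combined factor that maps one to the other look like this:

#include <stdio.h>

int main(void)
{
    double mb_bytes = 1024.0 * 1024.0;          /* storage: 1 MB = 1,048,576 B */
    double gb_bits  = 1000.0 * 1000.0 * 1000.0; /* transmission: 1 Gb = 10^9 b */

    /* Bytes per second carried by a 1 Gbps link. */
    printf("1 Gbps = %.0f B/s\n", gb_bits / 8.0);             /* 125000000 */

    /* How many transmission-style Gb/s one storage-style MB/s is worth. */
    printf("1 MB/s = %.9f Gb/s\n", mb_bytes * 8.0 / gb_bits); /* 0.008388608 */
    return 0;
}

Multiplying your 3323.50 MB/sec by that last factor gives 27.88 Gb/sec, matching what ib_write_bw reports.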

PS: see the Wikipedia article Data-rate units for an explanation of these units.

Hi.

Consider this encoding rule:

That’s good info to help figure out the effective data rate, thanks. My question, though, is why ib_send_bw / ib_write_bw consider a megabyte to be 1,048,576 bytes and a gigabyte to be 1,000,000,000 bytes. And, secondarily, whether this is a standard approach across the industry.

But thank you for that. I’ll have to account for the effective data rate when I compare throughputs.

Hi haonan,

Thank you for helping me out with this. I think that is the correct answer to my specific question of how many bytes there are in 1 Gb when it is used for throughput (Gb/sec), and you backed the calculation up with a Wikipedia reference. I still don’t understand why ib_write_bw uses a power of 2 for megabytes/sec but a power of 10 for gigabits/sec. Thanks!