I’ve been measuring the peak memory bandwidth I can obtain in some linear algebra kernels I’ve written. I want to make sure I am calculating my percentage utilization correctly (or really, to check that Nvidia is reporting its bandwidth correctly). The most I seem to be able to get out of a GTX 480 is 150 GiB/s = 161 GB/s. Nvidia reports the peak memory bandwidth of the 480 as 177.4 GB/s. Since they state GB/s and not GiB/s, I presume that they are counting memory bandwidth in base-10 (gigabyte = 10^9 bytes) rather than base-2 (gibibyte = 2^30 bytes)?

If Nvidia uses base-10, then I can achieve about 90% of peak bandwidth, but if they're actually using base-2, then my utilization is only about 85%. So which is the correct measure?
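For reference, here is the arithmetic behind the two utilization figures, as a small sketch (the 150 GiB/s measured figure and the 177.4 quoted figure are from above; the two interpretations differ only in whether the quoted peak is taken as 10^9 or 2^30 bytes):

```python
measured_bytes = 150 * 2**30      # measured: 150 GiB/s, in bytes per second

# Interpretation 1: Nvidia's "177.4 GB/s" is base-10 (1 GB = 1e9 bytes)
peak_base10 = 177.4e9
util_base10 = measured_bytes / peak_base10

# Interpretation 2: the quoted figure is really base-2 (177.4 GiB/s)
peak_base2 = 177.4 * 2**30
util_base2 = measured_bytes / peak_base2  # reduces to 150 / 177.4

print(f"base-10 peak: {util_base10:.1%}")  # ~90.8%
print(f"base-2 peak:  {util_base2:.1%}")   # ~84.6%
```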