# Memory Speed Calculation

Hey,

I’d like to calculate the maximum memory bandwidth my card can achieve.

Assume I have the G80 with a memory bus width of 384 bits and a memory clock of 900 MHz.

To get the GB/s I’d simply compute

```
384 * 900 / (8 * 1024 * 1024 * 1024)
```

with the 8 converting bits to bytes and the factors of 1024 converting bytes to kilobytes and so on, but the number comes out ridiculously small.

In the CUDA parallel reduction whitepaper they made a sample calculation with the same graphics card and came up with 86.4 GB/s, calculated as

```
384 * 1800 / 8
```

Could someone explain to me how this makes any sense? The unit of this is byte/s. And why is the memory clock multiplied by 2?

Thanks and best regards, tdhd

The GPU uses double data rate memory, which transfers two bits per pin every clock cycle, hence the factor of 2.

The memory clock is 900 megahertz, i.e. 900e6 cycles per second, not 900. From there you get to the formula Nvidia gave, provided you conveniently define a gigabyte as 1e9 bytes (as the hard disk manufacturers do as well) to make the numbers look a bit bigger.
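Putting the pieces together (the clock in Hz rather than a bare 900, the DDR factor of 2, and "giga" = 1e9), the arithmetic can be sketched like this in Python; the variable names are my own:

```python
# Theoretical peak memory bandwidth of the G80, as in the whitepaper figure.
bus_width_bits = 384       # memory bus width in bits
memory_clock_hz = 900e6    # 900 MHz memory clock
ddr_factor = 2             # double data rate: two transfers per clock

bytes_per_second = bus_width_bits * memory_clock_hz * ddr_factor / 8

# Using the decimal definition "giga" = 1e9:
print(bytes_per_second / 1e9)  # prints 86.4
```

This reproduces the whitepaper's 86.4 GB/s; dividing by 1024³ instead would give the slightly smaller binary-gigabyte figure.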

It is common practice to state bandwidth numbers using the ordinary meaning of the prefixes “mega” = 1e6 and “giga” = 1e9, thus 1 GB/sec means 1e9 bytes / second. The well-known STREAM benchmark for measuring memory bandwidth does this as well.

Thanks for your replies, I really didn’t take into account the mega in front of the Hz :).