The GPU uses double data rate (DDR) memory, which performs two data transfers per clock cycle, hence the factor of 2.
The memory clock is 900 megahertz, i.e. 900e6 cycles per second, not 900. From there you get to the formula Nvidia gave, provided you define a gigabyte as 1e9 bytes (as the hard-disk manufacturers do as well) to make the numbers look a bit bigger.
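As a sketch of how those numbers combine (the 256-bit bus width below is an assumed example value, not taken from your card; substitute the actual width from the spec sheet):

    # Rough sketch of the DDR bandwidth arithmetic.
    memory_clock_hz = 900e6       # 900 MHz memory clock
    ddr_factor = 2                # double data rate: two transfers per clock cycle
    bus_width_bits = 256          # assumption for illustration only
    bytes_per_transfer = bus_width_bits / 8

    bandwidth_bytes_per_s = memory_clock_hz * ddr_factor * bytes_per_transfer
    print(bandwidth_bytes_per_s / 1e9, "GB/s")   # 57.6 GB/s with these example numbers

With a different bus width the total scales proportionally; only the 900 MHz clock and the factor of 2 come from the question.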
It is common practice to state bandwidth numbers using the ordinary meaning of the prefixes “mega” = 1e6 and “giga” = 1e9, thus 1 GB/sec means 1e9 bytes / second. The well-known STREAM benchmark for measuring memory bandwidth does this as well.