Frequency vs. Memory Bandwidth

Hello,

I want to know the difference between two parameters related to the DRAM on graphics cards: memory bandwidth in GB/s and frequency in MHz. I am especially confused by the frequency… is this the operating frequency of the DRAM?
I would be thankful if you could clear up this doubt. By the way, what is the typical value of this frequency (in MHz) for the DRAM that comes with NVIDIA graphics cards?

Thanks

I will write this out for the 8400 GS, using the specifications from this link: Link

Regarding memory, there are four fields on that page:

Memory Clock (MHz): 400
Memory Amount: 256 MB
Memory Interface: 64-bit
Memory Bandwidth (GB/sec): 6.4

You can calculate the Memory Bandwidth from the Memory Clock and the Memory Interface: (400 x 10^6 Hz x (64/8) x 2) / 10^9 = 6.4 GB/sec

Here 400 x 10^6 is the Memory Clock in Hz, the 64-bit Memory Interface is divided by 8 to convert bits to bytes, and the result is multiplied by 2 because of the double data rate. [Best Practices Guide 2.3, section 2.2.1]
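
To make the arithmetic concrete, here is a minimal C sketch of the same calculation, plugging in the 8400 GS numbers from the table above (the variable names are mine, not from the spec sheet):

#include <stdio.h>

int main(void)
{
    double memory_clock_hz = 400e6;  /* Memory Clock: 400 MHz */
    double bus_width_bits  = 64.0;   /* Memory Interface: 64-bit */
    double ddr_factor      = 2.0;    /* two transfers per clock (double data rate) */

    double bytes_per_sec = memory_clock_hz * (bus_width_bits / 8.0) * ddr_factor;
    printf("Peak memory bandwidth: %.1f GB/sec\n", bytes_per_sec / 1e9);  /* prints 6.4 */
    return 0;
}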

I am not exactly sure why there are so many clocks on that page, but I am fairly sure that the Memory Clock is the clock at which data is fetched from DRAM.

Hopefully this helps you a bit

Lightenix

PS: if anyone can explain what exactly the Shader Clock and Core Clock are, please do :) (currently I think the Shader Clock is the processor clock inside a multiprocessor, perhaps the Core Clock multiplied by some factor?)

To estimate device performance, you only need to pay attention to the shader clock and the memory clock:

FLOPS = [shader clock] * [# of stream processors] * 3
Device Memory Bandwidth = [memory clock] * ([bus width in bits] / 8) * 2

The factor of 3 in the FLOPS calculation and the factor of 2 in the bandwidth calculation are specific to current CUDA hardware. Future cards might have different capabilities, and those factors will change.
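
If you want to compute these estimates at runtime, here is a rough sketch using the CUDA runtime API (my own illustration, not from the posts above). It assumes the cudaDeviceProp fields clockRate, memoryClockRate, memoryBusWidth and multiProcessorCount are available (the memory fields only exist in newer CUDA versions), and it assumes 8 stream processors per multiprocessor, which holds for compute capability 1.x cards like the 8400 GS:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  /* device 0; error checking omitted for brevity */

    /* clockRate and memoryClockRate are reported in kHz. */
    double shader_clock_hz = prop.clockRate * 1e3;
    double memory_clock_hz = prop.memoryClockRate * 1e3;

    /* Assumption: 8 stream processors per multiprocessor (compute capability 1.x). */
    int stream_processors = prop.multiProcessorCount * 8;

    double gflops = shader_clock_hz * stream_processors * 3.0 / 1e9;           /* factor of 3 from the FLOPS formula above */
    double gbps   = memory_clock_hz * (prop.memoryBusWidth / 8.0) * 2.0 / 1e9; /* factor of 2 for double data rate */

    printf("Estimated peak: %.0f GFLOP/s, %.1f GB/sec\n", gflops, gbps);
    return 0;
}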

The effects of the core clock are unobservable. (Well, perhaps if you deliberately screw with the factory settings, you could see some effect, but you shouldn’t do that. :) )