Bandwidth problem? Could anyone verify that this is normal?

I ran the bandwidth program from the CUDA SDK.
Do the following numbers look right to you, or are they very small?
If I should upgrade the computer, could you recommend a motherboard model?
I am using an 8800GTX.


Quick Mode
Host to Device Bandwidth for Pageable memory
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 1652.5

Quick Mode
Device to Host Bandwidth for Pageable memory
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 1331.5

Quick Mode
Device to Device Bandwidth
Transfer Size (Bytes) Bandwidth(MB/s)
33554432 63048.0

&&&& Test PASSED

I have an 8600GT, and the first two numbers are about the same. For the third number, I get 14800, so you have ~4x the device-to-device bandwidth I have.

Try running the bandwidth program with the --memory=pinned option; this should give you results closer to the maximum PCI-E efficiency (usually around 2-2.5 GB/s). Also, take a look at this thread
and compare your results to others'. Yours look fine to me.

Is there any way to increase that ~1 GB/s bandwidth I have?
Or is that the current maximum speed … ? Would buying a better PCI-E card help?

The PCI-E transfer speed depends on the motherboard, BIOS, and host CPU.

The best transfer rate is achieved with a PCI-E gen2 motherboard and PCI-E gen2 cards (G92-based).

The label “Host to Device Bandwidth for Pageable memory” means you’ve run the test with non-pinned memory (the default setting). To get the best bandwidth results (and get close to the physical limit of your PCI-E slot), you need to run “bandwidth.exe --memory=pinned”. You will only reach 2-2.5 GB/s transfers if you use pinned memory.
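If you want to see the difference in your own code rather than the SDK test, here is a minimal sketch of a pageable-vs-pinned comparison using standard CUDA runtime calls (the 33554432-byte size just mirrors the Quick Mode run above; timings will vary with your hardware):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 33554432;  // 32 MB, same size as the Quick Mode test

    // Pageable host buffer: the driver must stage each copy through an
    // internal page-locked buffer, which limits effective bandwidth.
    float *pageable = (float *)malloc(bytes);

    // Pinned (page-locked) host buffer: the GPU can DMA from it directly,
    // so transfers get much closer to the PCI-E limit.
    float *pinned;
    cudaMallocHost((void **)&pinned, bytes);

    float *dev;
    cudaMalloc((void **)&dev, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    float ms;

    // Time a host-to-device copy from the pageable buffer.
    cudaEventRecord(start);
    cudaMemcpy(dev, pageable, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("pageable H2D: %.1f MB/s\n", (bytes / 1048576.0) / (ms / 1000.0));

    // Time the same copy from the pinned buffer.
    cudaEventRecord(start);
    cudaMemcpy(dev, pinned, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);
    printf("pinned   H2D: %.1f MB/s\n", (bytes / 1048576.0) / (ms / 1000.0));

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(dev);
    cudaFreeHost(pinned);
    free(pageable);
    return 0;
}
```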

Thanks for all your replies :)

I used the pinned memory method, but it slows down the overall execution time by a factor of 10!

I am not sure, but maybe I am putting too much data into the pinned allocation…? which then slows down the rest of the program…?
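One possible cause (an assumption, not a diagnosis of your program): cudaMallocHost is far more expensive than plain malloc, because the OS has to page-lock the memory. If pinned buffers are allocated repeatedly, the allocation cost can easily swamp the transfer-speed gain. A sketch of the usual fix, allocating once and reusing (fill_chunk and the kernel launch are hypothetical placeholders):

```cuda
#include <cuda_runtime.h>

// Hypothetical processing loop; chunk producer and kernel are placeholders.
void run(size_t chunk_bytes, int num_chunks) {
    // Allocate the pinned staging buffer ONCE, outside the loop.
    // Page-locking memory is slow, so a cudaMallocHost per iteration
    // can make the whole program slower despite faster copies.
    char *staging;
    cudaMallocHost((void **)&staging, chunk_bytes);

    char *dev;
    cudaMalloc((void **)&dev, chunk_bytes);

    for (int i = 0; i < num_chunks; ++i) {
        // fill_chunk(staging, chunk_bytes, i);   // placeholder: produce data
        cudaMemcpy(dev, staging, chunk_bytes, cudaMemcpyHostToDevice);
        // process_chunk<<<grid, block>>>(dev);   // placeholder kernel
    }

    cudaFree(dev);
    cudaFreeHost(staging);
}
```

Also note that pinned memory cannot be paged out, so page-locking a large fraction of system RAM can slow down the rest of the machine; keep pinned buffers modest and reuse them.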

Those numbers seem right, I get about the same thing…

except you're getting 5 GB/s more than me :S