Hey!
I have an 8800 GTX card and I believe I have a memory bandwidth problem.
When I run ./bandwidthTest (from the SDK), I get:
[CODE]./bandwidthTest
Quick Mode
Host to Device Bandwidth for Pageable memory
Transfer Size (Bytes)   Bandwidth(MB/s)
33554432                2457.9

Quick Mode
Device to Host Bandwidth for Pageable memory
Transfer Size (Bytes)   Bandwidth(MB/s)
33554432                2107.5

Quick Mode
Device to Device Bandwidth
Transfer Size (Bytes)   Bandwidth(MB/s)
33554432                4091.7

&&&& Test PASSED[/CODE]
The device-to-device bandwidth looks really small to me.
Isn't it supposed to be around 40000 MB/s or more?
Does anybody have an idea what's going on?
The bandwidth you see for pageable memory is pretty good, but your device-to-device number is way too low. I get 70726 MB/sec with our 8800 GTX. Is anything else running on your system? Heavy X usage, or other CUDA jobs?
If you use pinned memory (./bandwidthTest --memory=pinned), you'll see the host-to-device and device-to-host numbers increase, but device-to-device should stay the same.
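To illustrate why pinned memory only affects host transfers, here is a minimal sketch (not the actual bandwidthTest source; the 32 MB size just mirrors the output above) comparing pageable and pinned host buffers with the CUDA runtime API:

```cuda
// Sketch: pageable vs pinned host buffers for host-to-device copies.
#include <cstdlib>
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 32 * 1024 * 1024;  // 32 MB, same as bandwidthTest
    void *d_buf, *h_pinned;
    void *h_pageable = malloc(bytes);       // ordinary pageable memory

    cudaMalloc(&d_buf, bytes);
    cudaMallocHost(&h_pinned, bytes);       // page-locked (pinned) memory

    // Pageable copy: the driver stages data through an internal
    // pinned buffer, which limits the host<->device transfer rate.
    cudaMemcpy(d_buf, h_pageable, bytes, cudaMemcpyHostToDevice);

    // Pinned copy: the GPU can DMA directly from the host buffer,
    // which is why --memory=pinned reports higher H2D/D2H numbers.
    cudaMemcpy(d_buf, h_pinned, bytes, cudaMemcpyHostToDevice);

    cudaFreeHost(h_pinned);
    cudaFree(d_buf);
    free(h_pageable);
    return 0;
}
```

Device-to-device copies never touch host memory at all, so pinning can't change that number.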
Thanks for this answer!
But my device-to-device is really slow (as you mentioned), and I don't have anything else heavy running, so I can't figure out the reason for this low speed.
With this command
./bandwidthTest --memory=pinned
I get the same device-to-device bandwidth.
Could it come from the driver or the driver installation?
My config is:
Dell T7400
8800 GTX
Open Suse 10.2
driver 169.09
Problem solved!
I discovered that I have a second card in the machine that can also run CUDA, and that card was my default device! After changing all the cut_device_init calls to cudaSetDevice, I now get the expected bandwidth.
Thanks for helping solve the problem :D
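For anyone hitting the same thing, a minimal sketch of enumerating the devices and selecting one explicitly with the runtime API (the index 1 below is just an example; pick whichever index your 8800 GTX reports):

```cuda
// Sketch: list CUDA devices and select one explicitly with
// cudaSetDevice() instead of relying on the default (device 0).
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
    }
    // Must be called before any allocations or kernel launches
    // on the intended device; 1 here is a placeholder index.
    cudaSetDevice(1);
    return 0;
}
```

bandwidthTest itself also accepts a device argument (./bandwidthTest --device=N in the SDK samples), which is a quick way to check each card individually.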