Has anyone tried such a combination? The CUDA 2 examples work fine on both GPUs of the 9800GX2 card, but running on the Tesla they give results 12 times slower. Is it a problem with the driver?
For example: bandwidthTest with pinned memory on the 9800GX2 gives approx 5000MB/s dev->host and vice versa, and dev->dev is approx 52GB/s.
The same test on the Tesla gives approx 3500MB/s dev->host and vice versa, and only 4GB/s dev->dev.
Tests using GL, like nbody and particles, run fine on the 9800GX2, but when started on the Tesla they simulate only the first two frames, then restart from the beginning, and the GL window keeps blinking, repeating only those two frames.
I know the 9800GX2 has a bandwidth limit of approx 128 GB/s, i.e. 64 GB/s per GPU, so 52 GB/s per GPU is roughly 80% of that. But the Tesla should have 72 GB/s, and a result of only 4 GB/s is just 5% of the expected performance! Do you want to tell me that a frame rate of 2.8 FPS in fluidsGL.exe is normal performance for a Tesla, while the same test runs at 148 FPS on half of a 9800GX2 card?
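As a quick sanity check on those percentages (a rough sketch only; the 64 GB/s per-GPU figure simply splits the GX2's 128 GB/s evenly between its two GPUs, and 72 GB/s is the Tesla spec figure quoted above):

```python
def efficiency_pct(measured_gbps, theoretical_gbps):
    """Measured bandwidth as a percentage of the theoretical peak."""
    return 100.0 * measured_gbps / theoretical_gbps

# 9800GX2: 128 GB/s total across two GPUs -> ~64 GB/s per GPU
print(round(efficiency_pct(52, 64), 1))  # 81.2 -> the ~80% above

# Tesla: ~72 GB/s theoretical, only 4 GB/s measured
print(round(efficiency_pct(4, 72), 1))   # 5.6 -> the ~5% above
```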
I am sure you are wrong.
At first I tried that driver (175.16), but it didn't recognize the 9800GX2. Then someone from nVidia posted a link (on this forum) to the 174.55 driver, which is supposed to support such a combination. But it doesn't work well.
The Tesla has no graphics output, so you will be bound by the time it takes to move data from the Tesla to your other graphics card for display.
When running on the 9800GX2, you have no data transfer over PCI-E; with the Tesla you do.
But the Tesla getting 4 GB/s dev->dev is indeed almost 20 times slower than expected. Does your card get enough power?
I did a test with a GeForce 6800 in combination with the Tesla on the same driver. Yes, the Tesla works fine with it: dev->dev bandwidth rises to 68GB/s, and fluidsGL gives 48 FPS with a grid size of 1024x1024. Huh, finally I am sure the Tesla is working, but I am still not sure power is the problem.
The PSU is a modular unit, so for testing I now power the 9800GX2 from other outputs (for hard drives) via cable adapters, so that the Tesla is the only device connected to the PSU's SLI power output.
The PSU is rated for 1200W peak and 1000W continuous power.
I found another strange thing.
When I open the nVidia control panel, the motherboard settings report an incredible (maybe problematic) PCIe slot frequency of 5000MHz, and it cannot be changed. I am not sure why, or how it reaches such a frequency when the base clock is at its minimum value (100MHz) and cannot be lowered any further. That is with the 9800GX2 installed. When the 9800GX2 is removed and the 6800 is inserted in the same slot, the parameter changes to 2500MHz (which looks normal to me); the base clock is still 100MHz.
Is it a driver bug, or is 5000MHz a normal value?
Is there any nVidia tool capable of measuring the voltages on the GPUs, or the current a GPU consumes? It would be very helpful.
I solved the problem.
The Forceware driver from Guru3D, version 175.70, works perfectly. The Tesla now gives 77GB/s, and the 9800GX2 58GB/s per GPU (~200GB/s across all three GPUs).
Results are amazing … over 5GB/s dev->host and vice versa
That means nVidia has a HUGE problem with their official 174.55 driver.