I am running a Core i7 system dual-booting Windows XP 32-bit and Windows 7 64-bit, and I am having trouble configuring my Tesla C2050 alongside a GeForce GTX 560 Ti (or ANY nVidia card). I am also running three displays with this configuration (so ALL video outputs have to work).
My problem is that once everything is in and running, CUDA-enabled software (almost ANY software except benchmarking and monitoring tools) will not run on the Tesla but instead runs only on the GeForce card. If I pull the GeForce card out and replace it with an ATI Radeon, the Tesla runs just fine. Is there any way (maybe a registry tweak?) to force CUDA software to run on the Tesla by default (or ideally on both the Tesla AND the GeForce simultaneously)?
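For CUDA code I write myself I know I can pick the device explicitly, something along the lines of the sketch below (runtime API; the helper name is just my own), but that doesn't help with closed third-party applications, which all seem to default to the GeForce:

#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

// Hypothetical helper (my own name): pick the first device whose name
// contains "Tesla", falling back to device 0 if none is found.
int pickTeslaDevice(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return -1;
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess &&
            strstr(prop.name, "Tesla") != NULL) {
            cudaSetDevice(i);   // later CUDA calls in this thread use device i
            return i;
        }
    }
    cudaSetDevice(0);           // default behaviour: device 0
    return 0;
}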
The Tesla installation manual says that the nVidia graphics drivers must be uninstalled before installing the Tesla, which I did: install the Tesla driver, then the graphics driver. The Tesla won't run. Then I read a tech note saying the Tesla driver must be installed after/over the graphics driver. Did that. All the displays work fine, but the Tesla still won't run. I pulled OUT the GeForce card, installed an old ATI Radeon X1950, and the Tesla runs fine. This happens regardless of which OS I am running.
I can’t find anything on the net about this problem (I can’t be the only one?)
Help…
Brief hardware rundown:
Asus P6T SE motherboard, Core i7 980X Gulftown CPU water-cooled at 3.6 GHz, 9 GB RAM at 1400 MHz, SSD with Win XP and Win 7, GeForce GTX 560 Ti, Tesla C2050, 3 Dell 2007FP monitors, 750 W power supply
I’m running into the same problem (though I haven’t tried running the Tesla with an ATI card alongside it). I’ve got a C2050 under Windows 7 x64 Ultimate with the 270.61 Tesla driver. I took the GeForce out and tried running just the Tesla (with a display attached to it), and still couldn’t make any progress.
So, I ran nvidia-smi -q -i 0 and got the following output:
==============NVSMI LOG==============

Timestamp                       : Mon May 23 21:18:47 2011

Driver Version                  : 270.61

Attached GPUs                   : 1

GPU 0:1:0
    Product Name                : Tesla C2050
    Display Mode                : Enabled
    Persistence Mode            : N/A
    Driver Model
        Current                 : WDDM
        Pending                 : WDDM
    Serial Number               : (redacted)
    GPU UUID                    : (redacted)
    Inforom Version
        OEM Object              : 1.0
        ECC Object              : 1.0
        Power Management Object : N/A
    PCI
        Bus                     : 1
        Device                  : 0
        Domain                  : 0
        Device Id               : 6D110DE
        Bus Id                  : 0:1:0
    Fan Speed                   : 30 %
    Memory Usage
        Total                   : 3071 Mb
        Used                    : 3052 Mb
        Free                    : 18 Mb
    Compute Mode                : Default
    Utilization
        Gpu                     : 0 %
        Memory                  : 5 %
    Ecc Mode
        Current                 : Disabled
        Pending                 : Disabled
    ECC Errors
        Volatile
            Single Bit
                Device Memory   : 0
                Register File   : 0
                L1 Cache        : 0
                L2 Cache        : 0
                Total           : 0
            Double Bit
                Device Memory   : 0
                Register File   : 0
                L1 Cache        : 0
                L2 Cache        : 0
                Total           : 0
        Aggregate
            Single Bit
                Device Memory   : N/A
                Register File   : N/A
                L1 Cache        : N/A
                L2 Cache        : N/A
                Total           : 0
            Double Bit
                Device Memory   : N/A
                Register File   : N/A
                L1 Cache        : N/A
                L2 Cache        : N/A
                Total           : 135080
    Temperature
        Gpu                     : 58 C
    Power Readings
        Power State             : P12
        Power Management        : N/A
        Power Draw              : N/A
        Power Limit             : N/A
    Clocks
        Graphics                : 50 MHz
        SM                      : 101 MHz
        Memory                  : 135 MHz
If you notice, there's only 18 MB of free memory (nvidia-smi prints "Mb", but it means megabytes). My screen runs at 1920x1200x32, which works out to roughly a 9 MB framebuffer; if the driver double-buffers it internally, that's about 18 MB. So for some reason, all of the memory beyond what's allocated for the display is already in use.
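For what it's worth, the free/total figures can be confirmed from inside a CUDA program with cudaMemGetInfo rather than trusting nvidia-smi; a minimal sketch that just prints the numbers:

#include <cstdio>
#include <cuda_runtime.h>

int main(void)
{
    size_t freeBytes = 0, totalBytes = 0;
    cudaError_t err = cudaMemGetInfo(&freeBytes, &totalBytes);
    if (err != cudaSuccess) {
        printf("cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // For comparison: a 1920x1200 desktop at 32 bpp is 1920 * 1200 * 4 bytes,
    // roughly 8.8 MB per buffer, so ~18 MB if the driver double-buffers it.
    printf("free: %.1f MB, total: %.1f MB\n",
           freeBytes  / (1024.0 * 1024.0),
           totalBytes / (1024.0 * 1024.0));
    return 0;
}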
If it helps, I'm also getting "out of resources" error codes from kernel launches, but strangely not from allocating buffers or copying data into those buffers before the launch (and I commented out any code that uses streams, to rule out picking up error codes left over from earlier commands). A trimmed-down sketch of how I'm checking is below.
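Roughly how the checks look (a sketch, not my real code; myKernel and the sizes are placeholders): the cudaMalloc and cudaMemcpy calls come back cudaSuccess, and the "out of resources" code only shows up from cudaGetLastError()/the synchronize after the launch, since the launch itself returns nothing directly.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Placeholder kernel -- the real code does actual work here.
__global__ void myKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 20;                      // placeholder size
    float *h_data = (float *)calloc(n, sizeof(float));
    float *d_data = NULL;

    // These come back cudaSuccess for me...
    cudaError_t err = cudaMalloc((void **)&d_data, n * sizeof(float));
    printf("cudaMalloc : %s\n", cudaGetErrorString(err));

    err = cudaMemcpy(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice);
    printf("cudaMemcpy : %s\n", cudaGetErrorString(err));

    // ...but the launch fails. Launches are asynchronous, so the error has to
    // be read back afterwards.
    myKernel<<<(n + 255) / 256, 256>>>(d_data, n);
    printf("launch     : %s\n", cudaGetErrorString(cudaGetLastError()));
    printf("sync       : %s\n", cudaGetErrorString(cudaDeviceSynchronize()));

    cudaFree(d_data);
    free(h_data);
    return 0;
}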