GTX 295, CUDA thinks I have only 30 multiprocessors

I have a single GTX 295 installed in a Rampage II Extreme board with 12 GB RAM and an i7 965 CPU, and I am running Vista x64.

When I compile & run the ‘deviceQuery’ project included with the CUDA SDK samples, I get the following output:

There is 1 device supporting CUDA

Device 0: “GeForce GTX 295”
Major revision number: 1
Minor revision number: 3
Total amount of global memory: 939524096 bytes
Number of multiprocessors: 30
Number of cores: 240
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 16384
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 262144 bytes
Texture alignment: 256 bytes
Clock rate: 1.24 GHz
Concurrent copy and execution: No

Test PASSED

Press ENTER to exit…

I was expecting that CUDA would ‘see’ 2 cards with 30 MPs each, or possibly 1 card with 60 MPs (either way, a total of 480 cores).

This would suggest that the CUDA code I write cannot take full advantage of the GTX 295.

How can I ‘see’ the full resources of the GTX 295?
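For anyone who wants to check this from code rather than the SDK sample, here is a minimal enumeration sketch using the CUDA runtime API (cudaGetDeviceCount / cudaGetDeviceProperties). On a correctly configured GTX 295 it should report two devices with 30 multiprocessors each:

    // Minimal device-enumeration sketch (CUDA runtime API).
    // A correctly configured GTX 295 should report two devices,
    // each with 30 multiprocessors.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        printf("CUDA sees %d device(s)\n", count);

        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            // clockRate is reported in kHz
            printf("Device %d: %s, %d multiprocessors, %.2f GHz\n",
                   i, prop.name, prop.multiProcessorCount,
                   prop.clockRate / 1.0e6);
        }
        return 0;
    }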

Check the release notes; there is a known issue on Vista with multiple GPUs.

IIRC, you need to either 1) go into the PhysX control panel and enable PhysX on the 2nd card, which enables CUDA under the hood, or 2) attach a 2nd display to the 2nd card, which should also make it show up. Also make sure you are running CUDA 2.1.

Make sure that SLI is disabled.

Just to be clear, once you fix this problem, you will see two cards with 30 MPs each (not 1 card with 60 MPs).
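Since the two halves show up as separate devices, you have to drive them explicitly. In the CUDA 2.x runtime a context is bound to the host thread that creates it, so the usual pattern is one worker thread per device, each calling cudaSetDevice() before any other CUDA call. A rough sketch follows (Windows threads; the kernel and launch configuration are illustrative placeholders, not anyone's actual code):

    // One host thread per GPU: required before CUDA 4.0, where a
    // runtime context is tied to the host thread that creates it.
    // 'fill' and its launch configuration are placeholders.
    #include <cstdio>
    #include <windows.h>
    #include <process.h>
    #include <cuda_runtime.h>

    #define N (30 * 256)   // e.g. one block per multiprocessor

    __global__ void fill(float *data)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        if (idx < N) data[idx] = (float)idx;
    }

    unsigned __stdcall worker(void *arg)
    {
        int dev = (int)(size_t)arg;
        cudaSetDevice(dev);           // must precede all other CUDA calls
        float *d_data;
        cudaMalloc((void**)&d_data, N * sizeof(float));
        fill<<<30, 256>>>(d_data);
        cudaThreadSynchronize();      // pre-4.0 name for cudaDeviceSynchronize
        cudaFree(d_data);
        printf("device %d done\n", dev);
        return 0;
    }

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);   // expect 2 once both halves are visible
        HANDLE threads[8];
        for (int i = 0; i < count && i < 8; ++i)
            threads[i] = (HANDLE)_beginthreadex(0, 0, worker,
                                                (void*)(size_t)i, 0, 0);
        WaitForMultipleObjects(count, threads, TRUE, INFINITE);
        return 0;
    }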

I’ve just faced exactly the same problem, with the GTX 295 being seen by Vista as one device. This can be solved by changing the multi-GPU settings in the NVIDIA Control Panel. Please check the attached screenshot for the settings that make Vista see the GTX 295 as two devices.

Thanks, AndreiB. I tried the same thing, but it did not work; it was not until I plugged in a second monitor and turned off SLI that I could get it to work correctly.

I have the same problem under Windows 7 x64 with my GTX 295 and have not been able to solve it. I’ve tried different driver versions (both Vista and Win7) without luck. I have tried all combinations of “Do not use multi-GPU mode” and “Set PhysX GPU acceleration” with no change in result as far as CUDA goes. I’ve confirmed with GPU-Z that turning off multi-GPU mode does indeed turn off SLI.

To add insult to injury, the second GPU gets completely ignored unless I turn on SLI. I have two monitors connected to the system, and both insist on running on the same GPU that CUDA uses.

Any suggestions on how to resolve this would be greatly appreciated.

I’ve managed to solve the problem, though in a fairly unsatisfactory way. It was actually mentioned in the thread above, although I did not interpret it correctly at the time. I had to plug a second monitor into the HDMI port; the second GPU outputs only to the HDMI port. As it happens, one of my two monitors does have an HDMI input, so it was easy. Just plugging in a cable without a monitor didn’t work.

While this solves the problem for this particular workstation, I’m also considering building a dedicated computation box with three GTX 295s. As things look now, it would seem that I have to connect six monitors to it in order to get all the GPUs working. This seems quite absurd, needlessly expensive, and very impractical.

Now my question is this: Is this a Vista/Windows 7 issue, or is it also true for XP? Under XP, can you run multiple GPUs without plugging monitors into them?

Given that Tesla cards work in XP (and not in Vista), you can use GPUs in XP that do not have a monitor attached. I have seen people use special dummy connectors on their cards to trick the OS into thinking there is a monitor attached.