GTX 295 on Windows 7 Cuda unable to recognize second GPU


I’m unable to find a way for CUDA to recognize and use both GPUs on the GTX 295.

The NVIDIA Control Panel sees both GPUs, but only one is reported by CUDA.

There is no NVIDIA utility I know of that can handle this properly, and no option in the NVIDIA Control Panel.

How do I enable this?


Turn off SLI.

edit: actually, what driver version are you using? I don’t remember what we released for Win7 (I haven’t used it yet)

edit 2: nevermind, we seem to have a 181.xx driver out, which does have offscreen support in WDDM. I presume it’s there in Win7 too.


SLI is turned off (“Set PhysX GPU acceleration” = Disabled and “Select the multi-GPU configuration” = “Do not use multi-GPU mode”).

I’m using 181.71 x64.

I’m not a programmer, so I need specifics from a user’s perspective on what’s required to ensure CUDA recognizes both GPUs. I’ve seen several reports of this on BOINC under Windows Server 2008 and Vista as well, so it seems quite a few people are unable to find proper descriptions/utilities/routines to make CUDA utilize both cores.


“PhysX acceleration” is a fancy term for CUDA on a non-display device, so turn that on.


It doesn’t make a difference, unfortunately. I had turned it off because one user reported that doing so let CUDA see all of his cores in a 2x physical / 4x core setup. Turning it back on has no effect.

I have also tested this to no avail:

  • In the registry, go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video(GPU ID)\0000 (you might also have to hit 0001 on a D870 or Plex D2)

  • add the following two DWORDs and set them to 1:
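The names of the two DWORDs were cut off in the steps above; a later reply in this thread confirms that one of them is LimitVideoPresentSources. As a hedged sketch only, a .reg file setting just that confirmed value would look like the following (the GUID is a placeholder for the actual “(GPU ID)” subkey on your machine, and the second, unnamed DWORD is still missing here):

```
Windows Registry Editor Version 5.00

; Placeholder GUID -- substitute the real "(GPU ID)" subkey from your registry
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{YOUR-GPU-GUID}\0000]
"LimitVideoPresentSources"=dword:00000001
```

Repeat the block for `\0001` (and further subkeys) as the step above describes for D870/Plex D2 setups.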



There must be some registry setting or tweak that enables CUDA to use both cores…? This feels like beta-land…


Win7 IS beta-land. I’ve never used it and have no idea how it works.

My beta-land comment is for the lack of instructions/documentation on how to control the CUDA multi-gpu behaviour on Windows. There is NO reference to this in Nvidia Control Panel.

As I mentioned initially, this is a problem for Vista and Windows Server 2008 users alike.

The problem is that there is no information from NVIDIA on how to control this behaviour at all, and we need it.

Here is another post from 4 March with Vista x64:…mp;#entry525319

How does he resolve this?


The reg keys fix it for Vista; I know this works because I have done it many times with multi-GPU systems. I don’t know if it works on Win7 because I’ve never used Win7.

Also, if you’re only trying to enable a second headless GPU for CUDA, the PhysX option should do it (it just sets that reg key for a single GPU).

I finally got a HDMI-adapter and did several tests before I got it to work using these steps:

1: disabled one of the GPUs in Windows Device Manager
2: enabled the same GPU
3: ensured a new monitor was listed in Windows display settings, then selected the GTX 295 GPU in the drop-down list and ensured it was extended (it was extended automatically)
4: started BOINC and saw CUDA recognized both GPUs

To make matters even more NVIDIA-beta, CUDA reverts to seeing only one GPU when the monitors are disconnected and the computer is restarted. How do I make the settings stick???

How is NVIDIA dealing with this messy handling of multi-gpu configuration on Windows Server 2008, Vista and Windows 7?

It seems coincidental that some have no problems while others work long hours to enable this basic functionality.


Like tmurray said, there is a Vista registry setting that fixes the problem on that platform. I imagine it would also work fine on Server 2008, since the two are (under the hood) just about the same. As for Windows 7… remember that you’re using a beta operating system. I don’t think the blame should be placed on NVIDIA for that unless you can completely rule out Windows 7 bugs.

I’m sure that NVIDIA will have the problem worked out on Windows 7 by the time it is available to the public. I understand that at some point they will be adding a settings panel of some kind to make the registry changes via a GUI instead of having to use regedit.

EDIT: I wanted to add that (AFAIK) the whole reason this is a problem in the first place is due to some restrictions in WDDM that come about due to Microsoft trying to ensure stability of the graphics device and drivers.

Apparently CUDA computing on Windows is still beta when it comes to multi-GPU cards on Vista/Server 2008; on XP this never occurs. Single GPUs work like a charm, no monitor ever needed.

This is frustrating, as this is also experienced by Windows Server 2008 and Vista users, but no one knows why some never experience this while others do. One Windows Server 2008 user even made this work without any monitor attached…

This being an NVIDIA forum, I expected someone to point to a technical, underlying mechanism that could be tweaked/checked, or at least an explanation of what causes some to have the problem while others do not, on Windows Vista/2008 AND Windows 7.

For the time being I must use a dummy plug in the HDMI port and hope there are no power outages, as those require manual intervention: pulling the dummy plug out and pushing it in again so Windows sees it’s there. The same happens with a real monitor attached, so this is not a dummy-plug issue. Windows should know there is a monitor attached to the HDMI port during startup, but alas… This is most likely a Windows 7 beta issue that will be resolved in the RC or RTM.

The dummy plug is becoming a de facto standard for those experiencing problems on Vista/2008 (just one example:…ap=true#881186) - what’s NVIDIA’s contribution to a solution?


Just my usual rant:

That’s why CUDA must have a life of its own and should not be tied to graphics… though some CUDA devices can be graphics devices…

Hear, hear, and then some :D



I have told you how to solve this with 18x.xx drivers or later on Vista/Server 08. I know it works because I have put a whole bunch of Tesla cards in one machine and it works fine. I also know it works fine with cards with display outs, because I’ve done it with Quadros too. According to you, this doesn’t work in Win7. Win7 is a beta. When Win7 is no longer a beta it will work, because I will sit down and test it myself and, if it’s not fixed, get it fixed before you see drivers.

The forum post you linked to doesn’t need a “dummy plug” or anything like that; it needs the appropriate registry keys set for all devices.
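Setting the keys “for all devices” means writing the same value under each numbered subkey (0000, 0001, …). As a hedged illustration only, here is a small Python sketch that emits a .reg file covering every subkey; the class GUID is a placeholder, and the key name LimitVideoPresentSources is the one a later reply in this thread confirms — check both against your own registry before importing anything:

```python
# Sketch: generate a .reg file that sets LimitVideoPresentSources=1 for
# every numbered GPU subkey (0000, 0001, ...) under the Video class key.
# The GUID below is a PLACEHOLDER -- substitute the "(GPU ID)" GUID from
# your own HKLM\SYSTEM\CurrentControlSet\Control\Video path.

GPU_CLASS_GUID = "{YOUR-GPU-GUID-HERE}"  # placeholder, not a real GUID

def make_reg_file(num_subkeys: int, guid: str = GPU_CLASS_GUID) -> str:
    """Return the text of a regedit-importable file covering num_subkeys GPUs."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for i in range(num_subkeys):
        # One section per numbered subkey: 0000, 0001, ...
        lines.append(
            rf"[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{guid}\{i:04d}]"
        )
        lines.append('"LimitVideoPresentSources"=dword:00000001')
        lines.append("")
    return "\n".join(lines)

if __name__ == "__main__":
    # e.g. two subkeys for a dual-GPU card like the GTX 295
    print(make_reg_file(2))
```

Save the output as a .reg file and import it with regedit; a reboot is typically needed before the driver picks the change up.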

Sarnath: This will never happen because it’s a terrible idea. You cannot have separate CUDA and graphics drivers if you ever want interop without hitting much more severe limitations than what WDDM imposes already. So, what you’re asking for is one driver if there’s a display connected and another driver if there’s not. You should be able to see pretty clearly why this is a recipe for complete and utter disaster and will never happen.

How will this registry fix be presented in a coming NVIDIA Control Panel or other GUI? When will it be made available?

It is the missing information/documentation and GUI settings that cause such confusion on this issue.


You’re preaching to the choir, but I have no direct control over the control panel. I am trying to get this done, though.

@Tim, never mind. I know this will never happen, and how much it can disturb a product company… Just my phantom wish… That’s all.

But time and again, when someone says “extend the desktop onto the Tesla card to make it recognized as a CUDA device” etc., it sounds very funny.

But you said it is fixed with some driver. If that’s so, then it’s goooood! Thanks.

I added only the LimitVideoPresentSources value, as binary “01 00 00 00”, on the first two “GPU ID” subkeys and was able to get four CUDA devices to show up.

I have only one monitor, and in display settings I have four monitor icons, the extra ones dimmed. Now both blue LEDs are on.

Edit: my suspicion was right, only the first two are required. Power management works with just the two.
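For reference, the regedit-importable equivalent of the binary form used here would be something like the following sketch (GUID again a placeholder; `hex:01,00,00,00` is the REG_BINARY spelling of the “01 00 00 00” value mentioned above):

```
Windows Registry Editor Version 5.00

; Placeholder GUID -- substitute the real "(GPU ID)" subkey from your registry
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{YOUR-GPU-GUID}\0000]
"LimitVideoPresentSources"=hex:01,00,00,00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{YOUR-GPU-GUID}\0001]
"LimitVideoPresentSources"=hex:01,00,00,00
```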

…And you made this work on Windows 7?