SLI is turned off (“Set PhysX GPU acceleration” = Disabled and “Select the multi-GPU configuration” = “Do not use multi-GPU mode”).
I’m using 181.71 x64.
I’m not a programmer, so I need specifics from a user’s perspective on what’s required to ensure CUDA recognizes both GPUs. I’ve seen several reports of this for Windows Server 2008 and Vista with BOINC as well, so it seems quite a few people can’t find proper descriptions/utilities/routines to make CUDA utilize both cores.
It doesn’t make a difference, unfortunately. I turned it off because one user reported that doing so made CUDA see all the cores in his 2x physical/4x core setup. Turning it back on has no effect.
I have also tested this to no avail:
In the registry, go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{GPU ID}\0000 (you might also have to do the same under 0001 on a D870 or Plex D2),
add the following two DWORDs and set them to 1:
DisplayLessPolicy
LimitVideoPresentSources
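In case it helps anyone trying the same thing, this is what I understand the tweak to look like as a .reg file (just a sketch; the {GPU ID} part is a placeholder for the actual GUID key under Video on your machine, and you would repeat the block for 0001 if needed):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{GPU ID}\0000]
"DisplayLessPolicy"=dword:00000001
"LimitVideoPresentSources"=dword:00000001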
There must be some registry setting or tweak that enables CUDA to use both cores…? This feels like beta-land…
My beta-land comment refers to the lack of instructions/documentation on how to control CUDA’s multi-GPU behaviour on Windows. There is NO reference to this in the NVIDIA Control Panel.
As I mentioned initially, this is a problem for Vista and Windows Server 2008 users alike.
The problem is that there is no information from NVIDIA on how to control this behaviour at all, and we need it.
The reg keys fix it for Vista; I know this works because I have done it many times with multi-GPU systems. I don’t know if it works on Win7 because I’ve never used Win7.
Also, if you’re only trying to enable a second headless GPU for CUDA, the PhysX option should do it (because it just sets that reg key for a single GPU).
I finally got an HDMI adapter and did several tests before I got it to work using these steps:
1: disabled one of the GPUs in Windows Device Manager
2: enabled the same GPU
3: ensured a new monitor was listed in Windows display settings, then selected the GTX 295 GPU in the drop-down list and ensured it was extended (it was extended automatically)
4: started BOINC and saw that CUDA recognized both GPUs
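For anyone who wants to double-check without BOINC, the deviceQuery sample in the CUDA SDK reports the same thing; the core of it is just a couple of standard runtime calls (a minimal sketch below, nothing in it is specific to this setup, and on a GTX 295 you would expect it to list two devices):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);   // how many GPUs the CUDA runtime sees
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA sees %d device(s)\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);          // name each enumerated device
        printf("  device %d: %s\n", i, prop.name);
    }
    return 0;
}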
To make matters even more Nvidia-beta, CUDA reverts back to seeing only one GPU when the monitors are disconnected and the computer is restarted. How do I make the settings stick???
How is NVIDIA dealing with this messy handling of multi-GPU configuration on Windows Server 2008, Vista and Windows 7?
It seems to come down to chance that some have no problems while others work long hours to enable this basic functionality.
Like tmurray said, there is a Vista registry setting that fixes the problem on that platform. I imagine it would also work fine on Server 2008 since they are (under the hood) just about the same. As for Windows 7… remember that you’re using a beta operating system. I don’t think the blame should be placed on nVidia for that unless you can completely rule out any Windows 7 bugs.
I’m sure that nVidia will have the problem worked out on Windows 7 by the time it is available to the public. I understand that at some point they will be adding a settings panel of some kind to make the registry changes via a GUI instead of having to use regedit.
EDIT: I wanted to add that (AFAIK) the whole reason this is a problem in the first place is restrictions in WDDM that stem from Microsoft trying to ensure the stability of the graphics device and drivers.
Apparently CUDA computing on Windows is still beta when it comes to multi-GPU cards on Vista/Server 2008; on XP this never occurs. Single GPUs work like a charm, no monitor ever needed.
This is frustrating, as Windows Server 2008 and Vista users experience it too, but no one knows why some never hit it while others do. One Windows Server 2008 user even made this work without any monitor attached…
This being an Nvidia forum, I expected someone to point to the underlying technical mechanism that could be tweaked/checked, or at least an explanation of why some have the problem while others do not, on Windows Vista/2008 AND Windows 7.
For the time being I must use a dummy plug in the HDMI port and hope there are no power outages, as those require manual intervention in the form of pulling the dummy plug out and pushing it in again for Windows to see it’s there. The same happens if a monitor is attached, so this is not a dummy-plug issue. Windows should know there is a monitor attached to the HDMI port during startup, but alas… This is most likely a Windows 7 beta issue that will be resolved in RC or RTM.
I have told you how to solve this with 18x.xx drivers or later on Vista/Server 08. I know it works because I have put a whole bunch of Tesla cards in one machine and it works fine. I also know it works fine with cards with display outs because I’ve done it with Quadros too. According to you, this doesn’t work in Win7. Win7 is a beta. When Win7 is no longer a beta, it will work, because if it’s not already fixed I will sit down, test it myself, and get it fixed before you see drivers.
The forum post you linked to doesn’t need a “dummy plug” or anything like that; it needs the appropriate registry keys set for all devices.
Sarnath: This will never happen because it’s a terrible idea. You cannot have separate CUDA and graphics drivers if you ever want interop without hitting much more severe limitations than what WDDM imposes already. So, what you’re asking for is one driver if there’s a display connected and another driver if there’s not. You should be able to see pretty clearly why this is a recipe for complete and utter disaster and will never happen.