Multi-GPU not recognized by Vista!

Hi,

I have 3 GTX 280 cards in a system running 32-bit Windows Vista with CUDA 2.1 installed. I wrote a small program to report the number of GPU devices on the machine. The cards are not connected in SLI, and they are seated properly in their PCIe slots. cudaGetDeviceCount returned a value of 1. I have another system with 2 GTX 285 cards, also on CUDA 2.1 but running Win XP; there, the same call recognizes both cards and returns 2. I need some suggestions on this issue.
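The probe is essentially the following (a minimal sketch using the runtime API; my actual program may differ slightly):

// Minimal device-count probe using the CUDA runtime API.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA device count: %d\n", count);
    return 0;
}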

You can try this registry patch I had lying around (rename it to .reg). It sets two keys for three devices. Run it, reboot, and try again. Also, please don’t use CUDA 2.1; use 3.0b1 or at least 2.3.
Enable_Non_Display_CUDA.reg.txt (1.17 KB)

I am using VS.NET 2003, in other words VC++ 7.1, so if I install any toolkit newer than CUDA 2.1, I get compilation problems. The error is the following:

c:\CUDA\include\host_config.h(114): fatal error C1083: Cannot open include file: 'crtdefs.h': No such file or directory

I think I am forced to use VS 2005 in order to use CUDA toolkit 2.3 or 3.0b1, but I am more comfortable working with VS.NET 2003. So if there is a way to get rid of the above error in VS.NET 2003, which appears only because I changed the toolkit version, then I can proceed. Please let me know your comments/suggestions on this whole issue of Visual Studio compatibility with the CUDA toolkits.
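For what it’s worth, a quick check like the following confirms which MSVC front end is being used (a sketch; _MSC_VER is a predefined MSVC macro):

// Quick host-side check of the MSVC version in use.
// 1310 = VC++ 7.1 (VS.NET 2003), 1400 = VS 2005, 1500 = VS 2008.
// crtdefs.h only ships with the VS 2005+ CRT, which is why the
// VC++ 7.1 headers cannot find it.
#include <cstdio>

int main()
{
#ifdef _MSC_VER
    printf("_MSC_VER = %d\n", _MSC_VER);
#else
    printf("not compiling with MSVC\n");
#endif
    return 0;
}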

Woohoo! A secret key. Adding that to my keychain now.

Christian

It’s not really secret; I posted something very much like this months ago, but I only figured out the class GUID, instead of a million different video GUIDs, a few weeks ago. This probably only works on Win7, too (WDDM 1.1 is required).

edit: I am a big liar. WDDM 1.1 is only required for the GTX 295 to work; this will work fine on anything else.

The following are my attempts with the CUDA SDK project called simpleMultiGPU:

1. Installed VS 2008.
2. Installed the CUDA 2.3 SDK and toolkit.
3. Ran the registry file given by tmurray.
4. Restarted the system.

The project compiles absolutely fine, but, as usual, it shows the number of GPUs as 1.

I repeated the same steps with the CUDA 3.0b1 SDK and toolkit; there is no difference in the result.
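For completeness, an enumeration along these lines should list every device the runtime exposes, with its name and compute capability (a minimal sketch; error checking omitted):

// List every device the runtime exposes, to see exactly which
// card(s) are visible to CUDA.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("device %d: %s (compute %d.%d)\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}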

I don’t know what else I can do, apart from simply concluding that CUDA is incapable of recognizing multiple GPUs on Win Vista and opting for Win XP instead. But I would like to hear some suggestions or opinions on the issue.

One issue to report, though:

NVAPI no longer seems to report a Mem Used counter for non-display CUDA cards. Or, to put it another way: GPU-Z 0.3.8 no longer shows memory usage in this case.

One more issue: I have two monitors, and I had to extend the desktop to two additional dummy-plug monitors in order to fold. Once this registry entry is added, I can no longer run two monitors on the system. Is there an adjustment that would let me run two monitors and still have Folding use the two other non-display cards?

This reg key disables dual monitors because one monitor gets enabled as a fake monitor, which ensures the card is visible to CUDA. What you need to do is figure out which entry corresponds to your actual display card (or cards) and remove the new keys from that entry, leaving the keys everywhere else. I don’t really know how to do this other than by removing the keys from one entry, rebooting, and checking that you still have the same number of CUDA devices available.

This is still a big pain, and there’s no good way to do it at the moment, but I might be able to improve it sometime.

I formatted the whole machine and installed a fresh copy of WinXP SP3. All three cards (GTX 280s) are now recognized. I also installed MS VS2008. I then wanted to test multi-GPU performance, so I ran the simpleMultiGPU project from the CUDA 3.0b1 SDK, and the test failed despite the machine having 3 GPUs. However, the other machine, with 2 GTX 285 cards on WinXP but with MS VS2003.NET and the CUDA 2.1 SDK, passes the test successfully. Screenshots of the tests on the 3-GPU machine and the 2-GPU machine are shown below. I am really confused as to how this can even happen.

[Screenshot: simpleMultiGPU test result on the 3-GPU machine]

[Screenshot: simpleMultiGPU test result on the 2-GPU machine]

If you take a close look at the result, you will see that the error is still small, even though it exceeds an arbitrarily chosen “epsilon” value.

I would not be too concerned about this.
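As an aside, the sample’s pass/fail is typically just an absolute difference compared against a fixed epsilon, so a result that is extremely close in relative terms can still “fail”. A hedged illustration with made-up numbers:

// Made-up numbers: a tiny relative error can still exceed a
// fixed absolute epsilon.
#include <cstdio>
#include <cmath>

int main()
{
    float cpuSum = 16777216.0f;   // hypothetical CPU reference
    float gpuSum = 16777218.0f;   // hypothetical GPU result
    float absErr = fabsf(gpuSum - cpuSum);
    float relErr = absErr / fabsf(cpuSum);
    // With eps = 1.0f the absolute check "fails", yet the relative
    // error is about 1e-7, i.e. the sums agree to 7 digits.
    printf("abs err = %g, rel err = %g\n", absErr, relErr);
    return 0;
}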

Maybe run a GPU memtest on all three GPUs to verify that the memory is stable.

Christian

If you mean running the bandwidthTest sample, here is a screenshot of the result from that test:

I don’t know what the error in the above screenshot means. Any ideas?

Coming to the multi-GPU speed test failure on the 3-GPU machine: I know the error is small, but I am unable to comprehend how 3 GPUs cannot process faster than a single CPU core. If there are limitations, for example that only certain types of apps work across multiple GPUs, or that only certain kinds of parallel operations show a speedup, then I would like to know more about them. I believe NVIDIA provided this example in their SDK to show that multiple GPUs give a speedup over a single CPU core, at least for the simpleMultiGPU app in their own SDK.
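For reference, the general pattern for using several GPUs is to split the data into per-device chunks and launch a kernel on each. A minimal sketch of that pattern (this assumes a CUDA 4.0+ runtime, where one host thread can drive several devices; the 2.x-era simpleMultiGPU sample instead spawned one CPU thread per GPU, which adds thread and context overhead that can swamp a small workload):

// Split-across-devices sketch: one chunk and one kernel launch
// per visible device.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;              // trivial per-element work
}

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);

    const int nPerGpu = 1 << 20;      // each device gets its own chunk
    std::vector<float*> bufs(count, (float*)0);

    // Kernel launches are asynchronous, so the devices work concurrently.
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc(&bufs[dev], nPerGpu * sizeof(float));
        scale<<<(nPerGpu + 255) / 256, 256>>>(bufs[dev], nPerGpu);
    }

    // Wait for every device to finish, then release its buffer.
    for (int dev = 0; dev < count; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
        cudaFree(bufs[dev]);
    }
    printf("ran a kernel on %d device(s)\n", count);
    return 0;
}

If the per-GPU workload is as small as in the SDK sample, launch and transfer overhead can easily hide any speedup over a single CPU core; splitting work across GPUs only pays off once each chunk is large enough to keep its device busy.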