Problems running CUDA on non-primary display

I have installed a GeForce 8800GTX board in an Intel-branded motherboard with ATi integrated graphics. When I installed CUDA, the ATi graphics were disabled and the 8800GTX was the primary display. I read elsewhere on the CUDA forums that it’s best to use another card for your Windows desktop and reserve the CUDA card for CUDA calculations; otherwise your system may hang during long kernel executions. So I installed the ATi drivers, told Windows to use the ATi graphics as the primary monitor, and chose not to extend the Windows desktop to my 8800GTX.
It seems that doing so messed up my system - the Windows Display control panel now reports that “The currently selected graphics driver cannot be used. It was written for a previous version of Windows, and is no longer compatible with this version of Windows. The system has been started using the default VGA driver.”
As far as I can tell, this error message is mistaken, since the driver versions are listed correctly in Control Panel->System->Hardware->Device Manager, and the Device Manager reports that both display adapters are working properly.
Whatever the cause of this error message, CUDA fails to find the 8800GTX at runtime and therefore won’t run anymore.
I’m going to disable the ATi graphics and go back to using the 8800GTX as my primary display so that I can continue with my CUDA programming. But I was hoping someone on this forum would know whether there is a solution to this problem, or whether it arises from inherent incompatibilities between the ATi and nVidia drivers. It would be nice if I could reserve my 8800GTX for CUDA computations only.
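For reference, a quick way to check whether the CUDA runtime can see the 8800GTX at all is to enumerate the devices. A minimal sketch using the standard runtime API; nothing here is specific to this machine:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    // Ask the runtime how many CUDA-capable devices it can see.
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        printf("No CUDA devices visible to the runtime.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %lu bytes of global memory\n",
               i, prop.name, (unsigned long)prop.totalGlobalMem);
    }
    return 0;
}
```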

Thanks!

Hi,

I’ve experienced the same thing. I then deactivated my ATI board, rebooted, reactivated it, and selected it as my primary monitor with the GTX as the secondary one; after that, everything was OK. If I don’t extend the desktop to the GTX, execution seems to be faster, but the data read from the GTX’s memory are always 0!
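One addition worth making here: when the watchdog kills a kernel, the launch fails silently unless you ask the runtime about it, and the following memcpy happily reads back zeros. A minimal sketch of the check; the kernel below is a made-up stand-in:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Trivial hypothetical kernel; the point is the error check below.
__global__ void fill(int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = 7;
}

int main() {
    const int n = 256;
    int h[n], *d = 0;
    cudaMalloc((void**)&d, n * sizeof(int));

    fill<<<(n + 127) / 128, 128>>>(d, n);

    // A launch is asynchronous and the watchdog kills it silently;
    // synchronize first, then ask the runtime what actually happened.
    // (cudaThreadSynchronize() in drivers of this era;
    //  cudaDeviceSynchronize() in later CUDA releases.)
    cudaThreadSynchronize();
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess) {
        fprintf(stderr, "Kernel failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    cudaMemcpy(h, d, n * sizeof(int), cudaMemcpyDeviceToHost);
    printf("h[0] = %d (should be 7, not 0)\n", h[0]);
    cudaFree(d);
    return 0;
}
```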

Hope this helps…

D. Houzet

Hi,

I think my question can also be posted under this topic. :) I am using a GeForce 6800GT as the primary graphics card for display and an 8800 GTX for computation, with the 97_73 driver installed for both of them.

Although the “unspecified launch failure” error message no longer appears, the computation results are not always correct. It seems that when the kernel runtime exceeds 5 seconds, the results are all “0”; when it stays within 5 seconds, the results are correct.

Can someone help me fix this problem? Thanks

Quoting the CUDA release notes:

The watchdog timeout for GDI operations is 5 seconds.
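A common way to live with that limit is to split the work into several short launches; the watchdog applies per launch, not to the application’s total run time. A rough sketch with a made-up kernel:

```cpp
#include <cuda_runtime.h>

__global__ void process(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;          // stand-in for the real work
}

int main() {
    const int total = 1 << 24;
    const int chunk = 1 << 20;           // tune so one launch stays well under 5 s
    float *d = 0;
    cudaMalloc((void**)&d, total * sizeof(float));
    // One short launch per slice: the watchdog only sees individual
    // launches, so the whole job can take minutes without tripping it.
    for (int base = 0; base < total; base += chunk) {
        int n = (total - base < chunk) ? (total - base) : chunk;
        process<<<(n + 255) / 256, 256>>>(d + base, n);
        cudaThreadSynchronize();         // retire this slice before the next
    }
    cudaFree(d);
    return 0;
}
```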

Peter

Hi Peter,

I am using another Nvidia card (a 6800 GT) as my primary display adapter and the 8800 for computing. Do you mean the 5-second limitation still exists in this case?

Haven’t tested that with XP. After all, CUDA talks to the card via the NV driver. If the watchdog hooks in there, the driver might not react correctly. (You do see the 8800 in the control panel after all, right? Even though it is deactivated.)

I can report that on Linux, the 5-second limit only applies if the 8800 is running the desktop. To run CUDA, you don’t even need an X11 desktop on the machine at all, so the 8800 can be the only card. Just make sure you load the nvidia kernel driver on startup (a modprobe line in boot.local), and CUDA runs fine on text-only server systems without the timeout.

Peter

Thanks Peter. I have done some tests under Windows XP and found that this problem exists even when the G80 is not used as the primary display card. I have posted my problem on the “CUDA programming and development” forum.

To answer the ATI/NVIDIA questions: I don’t think it’s legal to have multiple display drivers installed simultaneously in Windows XP. What you are trying to do is install an ATI driver, then install an NVIDIA driver, which disables the ATI driver, then make the ATI GPU your primary display adapter, which won’t work with the NVIDIA driver.

You need to use a different NVIDIA GPU as your primary display adapter, like the guys with 6800s are doing.

Mark

I’ve had good success running a GeForce 8800 GTS as the Cuda GPU, and a GeForce FX 5200 as the (primary) video display under Windows XP Pro SP2 on a Dell Precision 360.

The GPU runs kernels > 5 seconds with no problems. Works like a champ!

Note: The GeForce 8800 GTS is PCIe 16x; the FX 5200 is straight PCI. That might have something to do with it.

Here’s what worked for me – though no guarantees:

  1. Install the FX 5200 FIRST as the ONLY video adapter, and get it working with the Cuda drivers (currently 97.73). [This may require disabling any on-board video cards in the machine.] THIS STEP IS VERY IMPORTANT!

  2. If your BIOS has a mechanism for selecting the PCI graphics card (not PCIe) as the default card, do so.

  3. Power down the machine, and install the GeForce 8800 in the PCIe slot.

  4. Grab your lucky charms and power up in Safe Mode… The very brave can try booting directly into regular Windows mode and skipping to Step 8…

  5. Under Settings->Control Panel->System->Hardware->Device Manager->Display Adapters, right-click on the GPU card and select “Disable.”

  6. Power down the machine. Reboot in regular Windows mode.

  7. If all goes well, go to Settings->Control Panel->System->Hardware->Device Manager->Display Adapters, right-click on the GPU card and select “Enable.” This might cause things to flicker a bit…

  8. If all goes well, right click on the Desktop, select Properties->Settings. Make sure FX 5200 (or whatever card you want for your video card) has the “Extend my Windows desktop onto this monitor” box CHECKED, and the card you want as your GPU has this box UNCHECKED. (You select the proper cards using the Display dropdown.)

  9. I was able to run CUDA kernels at this point, with no 5-second limitation. (A quick verification sketch follows below.)
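As promised in step 9, a small sketch for making sure CUDA actually uses the card you reserved for it: pick the compute GPU by name instead of assuming it is device 0, since with two cards installed the ordering is not guaranteed. The “8800” match string is just an example:

```cpp
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    // With two cards installed, device ordering is not guaranteed,
    // so select the compute GPU by name rather than by index.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
        if (strstr(prop.name, "8800")) {   // example match string
            cudaSetDevice(i);
            printf("Selected device %d for CUDA.\n", i);
        }
    }
    return 0;
}
```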

Thanks jhanweck. I will try your method. Did you try running your program for much longer than 5 seconds, for example 1 minute, and check the results? I have tried Linux and still got garbage results when the running time is more than, say, 20 seconds. I think it’s a bug in the driver.

Archer, after further testing, I’m afraid you may be right… :(

Two problems I’ve encountered with this:

  1. After rebooting, the driver sometimes “forgets” which displays are active and inactive. For instance:
  • Before reboot: the FX 5200 is displays 1 and 2, with 1 as the primary monitor and the desktop extended to both; the 8800 GTS is display 3, with the desktop NOT extended to it. Things work OK.

  • After reboot: the displays are shuffled around! The driver assigned displays 1 and 3 to the FX 5200 and display 2 to the 8800 GTS, and extended the desktop to all of them! At this point, CUDA will not work…

This can be fixed temporarily via Settings->Control Panel->System->Hardware->Device Manager->Display Adapters: right-click the 8800 GTS and disable it. Then reboot; the FX 5200 should be fine. Then go to Settings->…->Display Adapters, right-click the 8800 GTS, and enable it. Back up and running… though this is certainly not ideal.

  2. I haven’t run a program for more than 20 seconds, but I am encountering some “instabilities” in the results on shorter kernel runs. The first run is fine, the second run is messed up, the third run is fine again, the fourth is messed up… I’m still investigating and will post when I have more.

All the above is quite a hack anyway. I’m hoping as CUDA and the drivers become more stable, all this will be unnecessary.

Archer, are you using texture memory in your application?

The reason I ask is that I’m running into some memory allocation bugs, and they seem to manifest when texture memory is used.

Archer, I’ve run into a strange bug that could be related to your problems:

I have a piece of code that gives different results on every other run. (It’s a fairly long program so I can’t include it here.)

There’s a global (kernel) function in the code that I’m not currently using… it’s NEVER called.

If I comment out that function, the program works perfectly every run!

If I put it back in, the program works only on every other run.

Since the program works under the emulator in either case, I suspect something amiss in the compiler. Until I can narrow it down to something simpler, it’s hard to say.

I did not use texture memory in my application.

Hi jhanweck, above is a test program I used. The correct results should be positive integers within [2, 10). You can try it.
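Archer’s actual program isn’t reproduced here, so the following is a purely hypothetical stand-in consistent with the description (integer results in [2, 10), runtime controlled by the loop bounds):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stand-in, NOT the original test program: nested loops
// whose bounds control the runtime; results should be integers in [2, 10).
__global__ void burn(int *out, int n, int loops) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int v = 2;
    for (int a = 0; a < loops; ++a)
        for (int b = 0; b < loops; ++b)
            v = 2 + (v * 3 + i) % 8;     // always lands in [2, 10)
    out[i] = v;
}

int main() {
    const int n = 65536, loops = 1000;   // raise loops to push past 5 seconds
    int h[8], *d = 0;
    cudaMalloc((void**)&d, n * sizeof(int));
    burn<<<n / 256, 256>>>(d, n, loops);
    cudaThreadSynchronize();
    cudaError_t err = cudaGetLastError();
    if (err != cudaSuccess)
        fprintf(stderr, "Kernel failed: %s\n", cudaGetErrorString(err));
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    printf("out[0] = %d (expect a value in [2, 10), not 0)\n", h[0]);
    cudaFree(d);
    return 0;
}
```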

Archer,

I ran your example compiled with -D_DEBUG.

With the loop bounds at 1000 each, it ran without trouble.

Changing the loop bounds to 10000 each, it ran for 6.4 seconds and died with:

Cuda error: Kernel execution failed in file ‘cuTestArcher.cu’ in line 62 : unspecified launch failure. [Line 62 is the kernel call.]

That explains the zeros…

Changing the loop bounds to 20000 each, the kernel ran for 25.6 seconds, and terminated with the same error.

This is different behavior from running on the primary display card; in that scenario, if the kernel runs for more than 5 seconds or so, it typically hangs the machine.

So, I suspect something amiss in the driver, not the OS.

[edit] Linux has troubles, too: http://forums.nvidia.com/index.php?showtopic=30575

Also, when running Archer’s test code, my CPU (not GPU!) usage hits 50% and sticks there until the app is finished.

Why is CPU usage so high when the GPU is doing all the work???

I filed a bug on this for Linux already. I don’t know the cause, but I believe the CPU load is probably caused by the CUDA runtime spinning on a mutex or something like that. Presumably they’ll have this fixed in a subsequent beta version. Old OpenGL drivers used to do similar things some years back, so I’m sure this is easy to solve; they most likely just need time to do the work.
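Later CUDA releases did add a device flag for exactly this, letting the host thread block instead of busy-wait during synchronization; it does not exist in the 0.8 beta, so this sketch assumes a newer toolkit:

```cpp
#include <cuda_runtime.h>

int main() {
    // Must be called before the context is created, i.e. before any
    // other call that touches the device. The host thread then sleeps
    // on synchronization instead of spinning at 100% on one core.
    cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
    // ... normal CUDA work follows ...
    return 0;
}
```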

John

It seems I’ve stumbled into a similar problem when trying to run CUDA on a secondary monitor while rendering with OpenGL, although my machine has only one graphics card installed (an 8800GTX) with two monitors connected to it. When running simpleGL or postProcessGL (the examples delivered with the SDK), they start up gracefully on my main display. However, when dragging the window from the primary monitor to the secondary monitor, the secondary monitor only shows a black area. Once the drag completes (the whole window within the secondary monitor), simpleGL locks the computer, which then has to be rebooted; postProcessGL simply exits.

The software I’m developing needs to be able to render to a secondary monitor in order to be useful. Has anyone experienced something like this? Is there a solution?

I have made an application that works great when run with one monitor connected, but when a second monitor is connected the application only displays a black window, even without dragging the window to the secondary monitor. I will debug this further to see whether it is my fault or caused by the same issue.

Thanks,
Jørn

Using display driver version: 97.73
CUDA SDK version: 0.8.1
OS: Windows XP Professional (SP2)
Graphics card: 8800GTX

PS: Many thanks to NVIDIA for releasing CUDA. Just what I needed!