FYI: Windows 7 can use cards from multiple vendors

Just thought I’d put a quick post out there, since there’s been a bit of confusion on the forums about running a machine with both nVidia and ATI hardware. I’ve been working with nVidia cards for about 18 months now, but I picked up some ATI hardware today for testing purposes.

The machine I’m using for this setup has the following specs:
Windows 7 Ultimate x64
Intel Core 2 Duo E6600
4GB DDR2-800 (Crucial or Kingston, I can’t remember which)
Gigabyte GA-EP45-UD3P (1 PCIe 2.0 x16, 1 PCIe 2.0 x8)

I’ve been running the 195.62 drivers and CUDA 2.3 toolkit/SDK on this machine for testing purposes with an EVGA 8800 GT. It was the only card in this machine.

Since I only had one free slot, and a fairly small power supply in the machine, I went with the smallest card I could find (MSI R5850-PM2D1G OC Radeon HD 5850 1GB). I moved the 8800 GT to the x8 slot, and installed the 5850 in the x16 slot.

This machine only has one monitor (usually), so I connected that to the 5850 (I figured I could try the single-machine debugging workaround with Nexus). I booted Windows, installed the latest ATI driver suite (the Dec '09 version of Catalyst), and restarted. I then tried to open the nVidia control panel, and got an error message complaining that no monitors were attached to the card, after which it closed.

So, I shut down the computer, dug an old monitor out of the closet, plugged it into the 8800 GT and booted to Windows. The extra monitor turned on when I got to the login prompt, and the desktop extended to it automatically. The nVidia control panel seems to work now (no error messages), but I haven’t tried anything crazy with it (yet!). I ran a few of the pre-compiled CUDA SDK binaries, and they work just fine.

I haven’t tried OpenCL yet (which is the point of this experiment), but I’ll do so later tonight after I install the CUDA 3.0 SDK.

The only “weird” thing I’ve noticed: if you open the nVidia control panel and click the System Information link, the dialog doesn’t list any devices in the left pane (where it would normally show all of the nVidia devices in the system). The blank entry that is selected there shows the correct driver version, but 0MB memory, IRQ 0, and Bus: “Unknown Bus”. It also says DirectX 11, though I don’t remember whether it said that before, or whether it’s due to something the ATI driver changed.

Hope this clears things up a bit about running both ATI and nVidia devices in a single machine, something that I think will appeal to developers doing any commercial work with OpenCL. If you have any questions, let me know and I’ll try to answer them.

Updates on individual SDK examples (CUDA SDK 2.3):

    All of the text-only examples seem to work just fine right now.

    OceanFFT: When launching the program, it quits with the error message “error on line 313 of oceanFFT.cpp”. Here’s the surrounding code, with the line in question marked:


// First initialize OpenGL context, so we can properly set the GL for CUDA.
// This is necessary in order to achieve optimal performance with OpenGL/CUDA interop.
if (CUTFalse == initGL(argc, argv)) {
    return CUTFalse;
}

// Cuda init
if (cutCheckCmdLineFlag(argc, (const char**)argv, "device"))
    cutilGLDeviceInit(argc, argv);
else
    cudaGLSetGLDevice(cutGetMaxGflopsDeviceId());

// create FFT plan
CUFFT_SAFE_CALL(cufftPlan2d(&fftPlan, meshW, meshH, CUFFT_C2R));  // <-- line 313


I don’t think that the FFT plan itself would have anything to do with the error, so it’s probably catching an error from something above it, which is strange, because the code checks for OpenGL initialization errors…

The imageDenoising project crashes with an error on line 564 of imageDenoisingGL.cpp, which has similar code:


// First initialize OpenGL context, so we can properly set the GL for CUDA.
// This is necessary in order to achieve optimal performance with OpenGL/CUDA interop.
initGL(argc, argv);

// use command-line specified CUDA device, otherwise use device with highest Gflops/s
if (cutCheckCmdLineFlag(argc, (const char**)argv, "device"))
    cutilGLDeviceInit(argc, argv);
else
    cudaGLSetGLDevice(cutGetMaxGflopsDeviceId());

cutilSafeCall( CUDA_MallocArray(&h_Src, imageW, imageH) );  // <-- line 564


It’s catching the cudaGLSetGLDevice error. (funny you post this right now…)

Hah, were you just working on it or something?

In any case, perhaps the cudaGLSetGLDevice() call could be wrapped in cutilSafeCall()? I’m curious whether it’s just something to do with the OpenGL driver, since the console-based SDK examples work just fine. Would it matter if I got the program window to open on the screen attached to the 8800 GT? (Right now it’s coming up on the screen attached to the ATI card.)

EDIT: Tim, did you happen to look at that email I sent you the other day about the PTX guide?

Yeah, I was looking at another case where that call can return an error on Win7 literally 30 seconds before I read that post. It should work if the window comes up on the NV adapter’s display.

I did read your email and forwarded it to the appropriate people, but I haven’t heard back yet either. Everybody’s busy right now…

Have you tested this setup with Nexus?

What I want to do is local debugging with Nexus on a GeForce GTX 260, with an ATI Radeon driving the screens.

I don’t have the GTX 260 yet, so it would be nice if someone could test this for me before I spend the money on one.

I have some doubts, because Nexus wants a GPU all to itself for debugging, so attaching a screen to the nVidia card doesn’t sound like a workable approach for Nexus to me. However, when no screen is attached to the nVidia card, CUDA doesn’t find the GPU at all (at least with the non-Nexus-compatible GeForce 8800 GTX that is currently in my system).