What does "graphics interoperability" mean?


I have a dual 8800 / dual monitor system and encountered the message, “Graphics interoperability on multi GPU systems currently not supported,” after running the fluidGL and other examples. Searching the forum I realized that I could simply prevent the call to isInteropSupported() from causing the application to abort. That did not work initially, but after enabling SLI in the NVIDIA control panel, I was able to get the examples to work.

While I have an idea of what graphics interoperability means, I am wondering what the “official” definition is. Also, can someone explain why, if graphics interoperability is not supported according to the isInteropSupported() function call, the programs still work once the call to that function and its abort logic are simply removed?


It’s defined in the programming guide. Basically, it allows CUDA to read/write OpenGL or Direct3D buffers.
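To make that concrete, here is a minimal sketch of what interop looks like with the CUDA runtime's OpenGL interop calls (the same pattern the SDK samples such as fluidGL use): an OpenGL buffer object is registered with CUDA, mapped to get a device pointer, written by a kernel, and then unmapped so OpenGL can draw from it. The kernel name and wrapper function here are hypothetical placeholders, and real code would register the buffer once at startup rather than on every frame.

```cpp
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Hypothetical kernel that fills the buffer with new data.
__global__ void myKernel(float *data, int n);

// Let CUDA write into an existing OpenGL vertex/pixel buffer object.
void updateBufferWithCuda(GLuint vbo, int n)
{
    // One-time registration of the GL buffer with CUDA
    // (in practice done at initialization, not per frame).
    cudaGLRegisterBufferObject(vbo);

    // Map: CUDA temporarily owns the buffer and gets a device pointer.
    float *devPtr = 0;
    cudaGLMapBufferObject((void **)&devPtr, vbo);

    // Run the kernel directly on the GL buffer's memory.
    myKernel<<<(n + 255) / 256, 256>>>(devPtr, n);

    // Unmap: hand the buffer back so OpenGL can render from it.
    cudaGLUnmapBufferObject(vbo);

    cudaGLUnregisterBufferObject(vbo);
}
```

The point of the map/unmap pair is that the data never leaves the GPU; without interop you would have to copy the buffer through host memory on every frame.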

Enabling SLI prevents CUDA programs from utilizing both GPUs, and I presume using both GPUs is the reason you bought two 8800 cards in the first place.

Yes, this was the intention. However, a few things in the documentation confuse me about using CUDA for graphics-based work when SLI is enabled or disabled:

  1. The documentation states at the end of chapter three, “If the system is in SLI mode however, only one GPU can be used as a CUDA device…”

  2. The readme file states, "For graphics interoperability, D3D or OpenGL must be running on the same GPU as the compute context. As a result, graphics interoperability does not work on systems with multiple GPUs installed."

This seems to imply that the system needs to be in SLI mode for interoperability to work, but I am puzzled as to why SLI cannot be disabled when running the demo apps: every test I have run gives a gray screen. Can OpenGL be configured through code to use only one of the GPUs, so that both GPUs could still be utilized? If not, it really seems to defeat the purpose of having two cards installed, or does it?

Please understand that this is my first week exploring CUDA, and I apologize if my questions may seem naive.



Hi Brian,

As mentioned in the documentation, interoperability between CUDA and OpenGL works only on one GPU. Multi-GPU configurations do not support this. When you enable SLI, the driver sees both GPUs as a single GPU (as mentioned in the doc), so interoperability works. But this comes at the expense of CUDA also seeing only one GPU, which is not what you want. As far as I know, SLI cannot be enabled/disabled from the application.

Here are some scenarios where two or more cards can be fully utilized:

  1. Run OpenGL-only applications with SLI enabled.

  2. Run CUDA-only applications with SLI disabled. See Multi-GPU example in the SDK.

  3. Run applications that use both CUDA and OpenGL but exchange no buffers between them (i.e., no graphics interoperability).
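For the second scenario, a sketch of the starting point (similar in spirit to the SDK's multiGPU sample, though this is not that sample's code): enumerate the devices, then have each host thread bind to a different device index with cudaSetDevice() before doing any work, since in this version of CUDA a context is tied to one host thread. With SLI disabled, both 8800s should show up.

```cpp
#include <stdio.h>
#include <cuda_runtime.h>

// List the CUDA devices visible to the runtime. With SLI disabled,
// each physical GPU appears as its own device; with SLI enabled,
// the driver exposes only one.
int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("%d CUDA device(s) found\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
    }
    // A worker thread would then call cudaSetDevice(i) for its
    // assigned index before any allocations or kernel launches.
    return 0;
}
```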

What scenario do you expect your application to run in? How do you intend to use both GPUs?

Most likely the first scenario is the one we would use. Our potential usage looks to be centered on two areas: image processing and particle effects. While image processing would certainly (it seems) need to use the first method listed, I am not as certain about whether particle effects would require it. I have some other overall concerns about using CUDA to do particle effects, but that is being handled in another forum discussion. At this point, I am more in the “investigation mode” and honestly do not know how we would use the additional GPU.

At present, the major downside I can see with the first scenario is that with SLI mode enabled, only one monitor seems usable. Every attempt at configuring two monitors with SLI enabled has met with failure. Perhaps I’m doing something completely wrong; I honestly don’t know.