Detecting if a GPU is driving a display

So this may have been answered elsewhere, but I’m having a hard time finding the answer…

Is there a simple way to detect whether a GPU (in a multi-GPU machine) is driving a display, either from the command line or through an API? The only thing I can think of is to query the devices and compare available memory, though this would only work for specific known configurations, not in the general sense.
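
For reference, this is roughly what I mean by comparing memory, using cudaMemGetInfo from the runtime API (available in newer toolkits); the display GPU tends to have less free memory because the desktop claims some of it, but the threshold is configuration-specific, which is why I don't like this approach:

[code]
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        size_t freeMem = 0, totalMem = 0;
        cudaSetDevice(dev);  /* cudaMemGetInfo reports on the current device */
        cudaMemGetInfo(&freeMem, &totalMem);

        /* On a GPU driving a display, the desktop has already claimed
           part of the memory, so the free/total ratio is lower. */
        printf("Device %d: %zu of %zu bytes free\n", dev, freeMem, totalMem);
    }
    return 0;
}
[/code]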

The idea is that I want to run a CUDA-enabled job on a remote machine, but if possible I don’t want to run on the GPU driving the display.

Thanks,
Byron

In CUDA 2.1 you can query whether a GPU has the watchdog timer enabled; the watchdog is only active on GPUs that have a display attached, so it makes a good proxy.
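
For example, a minimal sketch of that query using the kernelExecTimeoutEnabled field of cudaDeviceProp:

[code]
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        /* Report the watchdog state for each CUDA device. */
        printf("Device %d (%s): watchdog %s\n", dev, prop.name,
               prop.kernelExecTimeoutEnabled ? "enabled -> likely driving a display"
                                             : "disabled");
    }
    return 0;
}
[/code]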

You can also use NVAPI to get a lot of information like this:
http://developer.nvidia.com/object/nvapi.html
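
For the display question specifically, something along these lines should work with NVAPI's display enumeration (a rough, untested sketch based on the functions declared in nvapi.h):

[code]
#include <stdio.h>
#include "nvapi.h"

int main(void)
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return 1;

    NvDisplayHandle hDisp;
    for (NvU32 i = 0; NvAPI_EnumNvidiaDisplayHandle(i, &hDisp) == NVAPI_OK; ++i) {
        NvPhysicalGpuHandle gpus[NVAPI_MAX_PHYSICAL_GPUS];
        NvU32 gpuCount = 0;

        /* Every physical GPU returned here is driving this display. */
        if (NvAPI_GetPhysicalGPUsFromDisplay(hDisp, gpus, &gpuCount) == NVAPI_OK) {
            for (NvU32 g = 0; g < gpuCount; ++g) {
                NvAPI_ShortString name;
                if (NvAPI_GPU_GetFullName(gpus[g], name) == NVAPI_OK)
                    printf("Display %u is driven by %s\n", i, name);
            }
        }
    }
    return 0;
}
[/code]

Any GPU that never shows up in this enumeration is a candidate for running the CUDA job.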

Is there any documented way of mapping an NvPhysicalGpuHandle (or such) to the corresponding CUDA device ID?

Don’t think so; NVAPI and CUDA are completely separate from one another.

That’s what I think, too. But Simon’s post gave me some hope that things had changed since I last looked at the NVAPI docs =)

Hi!

I have found that NVIDIA uses nvapi.dll inside nvcuda.dll. It’s easy to check with any hex viewer by searching for the NVAPI library name in the so-called “CUDA driver”. :)
So there must be some mapping between NvPhysicalGpuHandle and CUdevice in there!
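
Until someone digs that mapping out of the binaries, one workaround is to build it yourself by matching PCI bus IDs on both sides. A hypothetical sketch, assuming NvAPI_GPU_GetBusId and the pciBusID field that later CUDA toolkits added to cudaDeviceProp:

[code]
#include <cuda_runtime.h>
#include "nvapi.h"

/* Hypothetical mapping helper: both APIs can report the PCI bus ID,
   so compare them to associate an NvPhysicalGpuHandle with a CUDA
   device ordinal. Returns -1 if no match is found. */
int cudaDeviceForGpuHandle(NvPhysicalGpuHandle hGpu)
{
    NvU32 busId = 0;
    if (NvAPI_GPU_GetBusId(hGpu, &busId) != NVAPI_OK)
        return -1;

    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        if ((NvU32)prop.pciBusID == busId)
            return dev;   /* same physical board */
    }
    return -1;
}
[/code]

Combined with the display-enumeration snippet above, this would let you skip the CUDA device whose handle appears in NvAPI_GetPhysicalGPUsFromDisplay.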