Reading GPU temperatures in code

Does anyone know if it is possible to monitor a GPU’s temperature from within code?

I’m working on a CUDA application that runs on one or more GPUs (depending on configuration). I’ve found it useful to watch the GPU temperatures in the “NVIDIA Monitor”, and I would like the same information available in my code. The application runs as a remote service on a machine in another room, so it would be nice to report the temperature (actually the entire state) of the GPUs back to the client.

So, is this possible? Does anyone have pointers on where to look for the information? Can I get to it through Windows WMI?


There’s nvclock (Linux) discussed here:

For Windows, maybe you should take a look at this: …olPanel_API.pdf
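Another option, assuming the NVIDIA driver’s `nvidia-smi` command-line tool is installed and on the PATH (it ships with recent drivers on both Linux and Windows), is to shell out to it and parse the result from inside your service. This is only a sketch, not an official API, but `nvidia-smi`’s query flags make the output easy to consume:

```python
import subprocess

def parse_temps(output):
    """Parse nvidia-smi CSV output: one temperature (deg C) per line, one line per GPU."""
    return [int(line.strip()) for line in output.splitlines() if line.strip()]

def query_gpu_temps():
    """Return a list of GPU core temperatures, one entry per installed GPU.

    Assumes the driver's nvidia-smi tool is on the PATH; raises
    CalledProcessError if it is missing or fails.
    """
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader"],
        text=True,
    )
    return parse_temps(out)
```

If you would rather avoid spawning a process, NVIDIA also exposes the same readings through the NVML library (callable from C, or from Python via bindings), which is what `nvidia-smi` itself uses under the hood.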