Multi-GPU scheduling on Windows: looking for an analog of Linux nvidia-smi

Hi there,
we use nvidia-smi with exclusive compute mode for 2 GPUs on Linux (i.e. only one job can run per GPU, and if you don't call cudaSetDevice, the driver schedules GPU usage automatically until no free GPUs are left).
However, we also have 2 GPUs (half of a Tesla S1070 system) on Windows (HPC Server 2008) and want to make them available to multiple users. Now I am wondering how to solve this multi-GPU scheduling there. Is there an analog of the nvidia-smi tool?
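For context, here is a minimal sketch of the Linux-side behavior we rely on (simplified error handling; the exact error code returned when every exclusive-mode GPU is busy can vary between toolkit versions):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    // Report the compute mode of each GPU. In exclusive mode only one
    // context may exist per GPU at a time.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d (%s): computeMode = %d\n", i, prop.name, prop.computeMode);
    }

    // Deliberately do NOT call cudaSetDevice: with the GPUs in exclusive
    // mode the runtime tries the devices in turn and binds the new context
    // to the first free one. cudaFree(0) just forces context creation.
    cudaError_t err = cudaFree(0);
    if (err != cudaSuccess) {
        // Every GPU already has a context; the job cannot be scheduled.
        fprintf(stderr, "no free GPU: %s\n", cudaGetErrorString(err));
        return 1;
    }

    int dev = -1;
    cudaGetDevice(&dev);
    printf("scheduled onto GPU %d\n", dev);
    return 0;
}
```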

If you use the TCC drivers for HPC Server 2008, the GPUs are in exclusive mode by default. Additionally, nvidia-smi will be included with the TCC drivers soon.

We tried the TCC driver first, but it did not work with OpenCL, did it? Thus, we switched to another driver (found by selecting Tesla -> GPU Computing Processor -> C1060 -> Win 7 64-bit), and in my experience that driver has no exclusive mode.
So, is it right that if we used the TCC driver, every user would get an error when trying to grab a GPU that is already in use (since there is no scheduling)?

And do you know approximately when nvidia-smi will be included with the TCC driver? Will scheduling work the same as under Linux (i.e. if you do not explicitly set a device in your CUDA code, your program can be scheduled automatically)?
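If the answer is that a busy exclusive-mode GPU simply produces an error, one hypothetical workaround (a sketch, not a substitute for a real scheduler) is to retry until a GPU frees up; the specific error code, e.g. cudaErrorDevicesUnavailable, depends on the toolkit version:

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#ifdef _WIN32
#include <windows.h>
#define sleep_seconds(s) Sleep((s) * 1000)
#else
#include <unistd.h>
#define sleep_seconds(s) sleep(s)
#endif

// Hypothetical fallback when no system-level GPU scheduler is available:
// poll until an exclusive-mode GPU becomes free instead of failing at once.
int main() {
    for (int attempt = 0; attempt < 30; ++attempt) {
        cudaError_t err = cudaFree(0);  // force context creation
        if (err == cudaSuccess) {
            int dev = -1;
            cudaGetDevice(&dev);
            printf("got GPU %d\n", dev);
            return 0;
        }
        // All exclusive-mode GPUs are occupied; report and retry later.
        fprintf(stderr, "all GPUs busy (%s), retrying...\n",
                cudaGetErrorString(err));
        cudaGetLastError();  // clear the error state before retrying
        sleep_seconds(10);
    }
    return 1;
}
```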

Do you have any suggestions for how we could use exclusive mode and still make OpenCL available?
Or will OpenCL be included in the TCC driver soon, too?

I would still like to know whether OpenCL will be available in the TCC driver for Windows HPC Server 2008 soon.