How to direct computation to a specific GPU

I have an 8800 GTX and a Tesla C870 installed in my Linux box, and I have just compiled the CUDA SDK. All the examples/executables I tried worked just fine. I am trying to find out how to determine which GPU an application runs on, the GTX or the Tesla. Also, is there a way to force an application to run on the GPU of my choice, either from the command line or in the code?
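For the first part, a minimal sketch using the standard CUDA runtime API (cudaGetDeviceCount, cudaGetDeviceProperties, cudaSetDevice) can enumerate both boards by name and index, and then bind to one of them; the device names printed are whatever the driver reports for your cards:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    // List every CUDA-capable device the runtime can see.
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s\n", i, prop.name);
    }

    // Bind this host thread to device 1 (index is an assumption here;
    // match it against the names printed above). Must be called before
    // any other call that touches the device.
    cudaSetDevice(1);
    return 0;
}
```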

Below are the results of nvidia-smi and multiGPU, showing that both GPUs are found and seem to be working properly.


  • Alosca, a newbie.

cell [250] : nvidia-smi
Gpus found in probe:
Found Gpuid 0x1000
Found Gpuid 0x5000
Attaching all probed Gpus…OK
Getting unit information…OK
Getting all static information…

cell [251] : ./multiGPU
2 GPUs found
Processing time: 439.656006 (ms)

Press ENTER to exit…

I found the answer to my second question myself: the multiGPU source shows how to target a device:

static CUT_THREADPROC gpuThread(int * device)
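For reference, the body of that thread function (a simplified sketch, not the exact SDK source; CUT_THREADPROC and CUT_THREADEND come from the SDK's cutil headers) boils down to binding each host thread to one GPU via cudaSetDevice before doing any CUDA work:

```cuda
// Simplified sketch of a per-GPU worker thread from the multiGPU sample.
static CUT_THREADPROC gpuThread(int *device)
{
    cudaSetDevice(*device);   // bind this host thread to GPU *device
    /* ... allocate memory and launch kernels on that GPU ... */
    CUT_THREADEND;
}
```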

Now I need to know how to do the same with an arbitrary executable. Maybe through an environment variable? I am working on Linux…
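Later CUDA releases do provide exactly such an environment variable: CUDA_VISIBLE_DEVICES restricts which devices the runtime exposes to a process (and renumbers the visible ones starting at 0), so an unmodified executable can be steered to one GPU from the shell. Whether your installed toolkit supports it depends on the CUDA version:

```shell
# Expose only device 1 to the application; inside the process it
# appears as device 0. Works for any CUDA executable, no recompile.
CUDA_VISIBLE_DEVICES=1 ./multiGPU
```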