Hello -
I found this thread: http://forums.nvidia.com/index.php?showtop...rt=#entry577548
but I don’t think it is addressing what I need specifically.
I have a machine running Ubuntu that has a Quadro FX 5600 card in it.
Recently I attached ONE side of a Tesla S1070 (I only have one PCIe x16 slot).
After installing all the drivers and getting things set up, I notice that in /dev I have nvidiactl, nvidia0, nvidia1, and nvidia2.
How do I make sure which device my code goes to? (I think I got an answer from the previous thread.) More importantly, what happens if I do nothing? Do things need to be recompiled, etc.? What other headaches might I run into?
Thanks
The nvidia-smi approach is the best.
Presuming you want to run compute jobs on the S1070 and not on the Quadro, set the Quadro to compute prohibited and the S1070 devices to compute exclusive. The driver will then "automagically" give CUDA programs a context on a free S1070 GPU if there is one, or fail otherwise. If you have a high job throughput, consider using something like Sun Grid Engine to schedule, with a consumable resource equal to the number of compute-exclusive GPUs. Grid Engine can then hold jobs in its queue until a GPU resource is free, making sure none of your jobs fail because there is no free GPU.
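To double-check that the modes stuck, you can query them from the runtime. Here is a minimal sketch, assuming a CUDA toolkit recent enough to expose the computeMode field in cudaDeviceProp (the exact nvidia-smi flags for setting the modes vary by driver version, so check nvidia-smi --help for yours):

[code]
// Enumerate CUDA devices and report name + compute mode, so you can
// confirm the Quadro shows "prohibited" and the S1070 GPUs show "exclusive".
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        const char *mode = "unknown";
        switch (prop.computeMode) {
            case cudaComputeModeDefault:    mode = "default (shared)";   break;
            case cudaComputeModeExclusive:  mode = "compute exclusive";  break;
            case cudaComputeModeProhibited: mode = "compute prohibited"; break;
        }
        printf("device %d: %s, compute mode: %s\n", dev, prop.name, mode);
    }
    return 0;
}
[/code]

The device numbering printed here also tells you which /dev/nvidia* node corresponds to which physical GPU.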
Using this approach you don’t need to change a line of code or worry about device selection inside your code. The driver takes care of it for you.
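Just to illustrate what "no device selection" means in practice, here is a sketch of the pattern as described above (the kernel is a placeholder of my own, and cudaThreadSynchronize() is the call from that era's toolkits; newer ones use cudaDeviceSynchronize()):

[code]
// No cudaSetDevice() anywhere: the first runtime call creates a context
// on whichever compute-exclusive GPU the driver finds free, or fails if
// they are all busy or prohibited.
#include <stdio.h>
#include <cuda_runtime.h>

__global__ void dummy(void) { }   // placeholder kernel, illustration only

int main(void)
{
    // cudaFree(0) is the usual idiom to force context creation up front.
    cudaError_t err = cudaFree(0);
    if (err != cudaSuccess) {
        fprintf(stderr, "no free compute-exclusive GPU: %s\n",
                cudaGetErrorString(err));
        return 1;
    }

    int dev = -1;
    cudaGetDevice(&dev);
    printf("got a context on device %d\n", dev);

    dummy<<<1, 1>>>();
    cudaThreadSynchronize();   // cudaDeviceSynchronize() on newer toolkits
    return 0;
}
[/code]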