I have a Tesla C1060 installed alongside a GeForce 8800 GT. Is there a way to force all code to execute on the Tesla instead of the GeForce? Currently I can force it by calling cudaSetDevice(1).
I just want to use the GeForce card for display only.
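For reference, a minimal sketch of what the question describes: explicitly selecting the compute device with cudaSetDevice. This assumes the Tesla enumerates as device 1 on this particular system, which (as the replies below stress) is not guaranteed.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    // Assumption: the Tesla C1060 happens to enumerate as device 1 here.
    // Device ordering is not fixed, so verify with cudaGetDeviceProperties.
    cudaError_t err = cudaSetDevice(1);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaSetDevice failed: %s\n",
                cudaGetErrorString(err));
        return 1;
    }

    struct cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 1);
    printf("Now running on: %s\n", prop.name);
    return 0;
}
```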
Interesting … I wish the CUDA API could auto-select the "best" device when multiple devices are in the system: if a particular device is already driving the primary display, use the other one; if there is a Tesla device, use that, since a Tesla won't be driving a display anyway.
Not yet, although we've talked about things like that. Watchdog timer status is now a pollable device flag; we had to cut that from the beta due to time constraints, but it will be in 2.1 final. And don't worry, people with two GPUs who assumed that device 0 was the dedicated compute device in XP (or vice versa in Linux): we're not ruining your day.
Because nobody should ever rely on device ordering for anything. We have said, many times, "do not depend on device ordering for anything, because it is in no way fixed." People still do! Keep in mind that you can make a very good argument that this new ordering (or really, any particular ordering scheme you choose) is still wrong in some way, so you should be polling the available devices' properties and picking whichever device fits whatever criteria you have. If you need more flags to poll, let us know so we can try to implement them. And if you have an app that doesn't let you choose the GPU, either fix it or start complaining to the app developer.
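A sketch of the property-polling approach recommended above: enumerate all devices, skip any with the watchdog flag set (i.e. devices likely driving a display), and among the rest prefer the one with the most multiprocessors. This assumes the kernelExecTimeoutEnabled field of cudaDeviceProp, which is the pollable watchdog flag mentioned in the reply (present in CUDA 2.1 and later); the selection criterion itself is just one example, not the definitive policy.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Pick a compute device by its properties instead of relying on ordering.
int pickComputeDevice(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0)
        return -1;

    int best = -1;
    int bestSMs = -1;
    for (int dev = 0; dev < count; ++dev) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // kernelExecTimeoutEnabled means a display watchdog can kill
        // long-running kernels; skip such devices (a dedicated Tesla
        // will not have it set).
        if (prop.kernelExecTimeoutEnabled)
            continue;

        // Among watchdog-free devices, prefer the most multiprocessors.
        if (prop.multiProcessorCount > bestSMs) {
            bestSMs = prop.multiProcessorCount;
            best = dev;
        }
    }
    // Fall back to device 0 if every device has the watchdog enabled.
    return (best >= 0) ? best : 0;
}

int main(void)
{
    int dev = pickComputeDevice();
    if (dev < 0) {
        fprintf(stderr, "No CUDA devices found\n");
        return 1;
    }
    cudaSetDevice(dev);

    struct cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("Selected device %d: %s\n", dev, prop.name);
    return 0;
}
```

The point of wrapping this in a function is that the criteria live in one place: if you later want to key on compute capability or total memory instead, only pickComputeDevice changes.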