Setting the GPU compute mode to exclusive on Mac OS X (10.6.3)

I have a late 2009 Mac Pro running Mac OS X 10.6.3 with two NVIDIA GPUs, a GTX 285 and a GT 120 (the machine default). I want to put the GTX 285 into exclusive compute mode, since I cannot run any application whose kernels take more than about 10 seconds on that card. I have read that on Linux one uses nvidia-smi running in the background to set the compute mode. Is there a way to achieve a similar effect on Mac OS X?

Any input will be useful.

Thanks.
Rajesh


Device 0: "GeForce GTX 285"
CUDA Driver Version: 3.0
CUDA Runtime Version: 3.0
CUDA Capability Major revision number: 1
CUDA Capability Minor revision number: 3
Total amount of global memory: 1073414144 bytes
Number of multiprocessors: 30
Number of cores: 240
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 16384
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 2147483647 bytes
Texture alignment: 256 bytes
Clock rate: 1.48 GHz
Concurrent copy and execution: Yes
Run time limit on kernels: Yes
Integrated: No
Support host page-locked memory mapping: Yes
Compute mode: Default (multiple host threads can use this device simultaneously)

Device 1: "GeForce GT 120"
CUDA Driver Version: 3.0
CUDA Runtime Version: 3.0
CUDA Capability Major revision number: 1
CUDA Capability Minor revision number: 1
Total amount of global memory: 536543232 bytes
Number of multiprocessors: 4
Number of cores: 32
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 16384 bytes
Total number of registers available per block: 8192
Warp size: 32
Maximum number of threads per block: 512
Maximum sizes of each dimension of a block: 512 x 512 x 64
Maximum sizes of each dimension of a grid: 65535 x 65535 x 1
Maximum memory pitch: 2147483647 bytes
Texture alignment: 256 bytes
Clock rate: 1.40 GHz
Concurrent copy and execution: Yes
Run time limit on kernels: Yes
Integrated: No
Support host page-locked memory mapping: No
Compute mode: Default (multiple host threads can use this device simultaneously)

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 3.0, CUDA Runtime Version = 3.0, NumDevs = 2, Device = GeForce GTX 285, Device = GeForce GT 120

PASSED
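For reference, the two fields in the output above that matter for this thread, "Run time limit on kernels" and "Compute mode", can also be read from code via cudaGetDeviceProperties. A minimal sketch of my own (not deviceQuery itself; standard CUDA runtime API as of 3.0):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s\n", dev, prop.name);
        /* Non-zero when the watchdog timer applies to this device. */
        printf("  Run time limit on kernels: %s\n",
               prop.kernelExecTimeoutEnabled ? "Yes" : "No");
        printf("  Compute mode: %s\n",
               prop.computeMode == cudaComputeModeDefault    ? "Default" :
               prop.computeMode == cudaComputeModeExclusive  ? "Exclusive" :
               prop.computeMode == cudaComputeModeProhibited ? "Prohibited" :
                                                               "Unknown");
    }
    return 0;
}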

You can select the device to use from within your program itself. Look at the SDK examples: pass --device=x to select the device you want, and your software will run on whichever device you choose.
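For example, here is a minimal hand-rolled version of what the SDK samples do (the flag parsing is my own sketch, not the SDK's cutil helper):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    int dev = 0;  /* default device */

    /* Accept a --device=N flag, in the style of the SDK samples. */
    for (int i = 1; i < argc; ++i)
        if (strncmp(argv[i], "--device=", 9) == 0)
            dev = atoi(argv[i] + 9);

    /* All subsequent CUDA calls from this host thread go to this device. */
    if (cudaSetDevice(dev) != cudaSuccess) {
        fprintf(stderr, "cudaSetDevice(%d) failed\n", dev);
        return 1;
    }

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);
    printf("Using device %d: %s\n", dev, prop.name);
    return 0;
}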

I have pretty much the same setup (GTX 285 and GT120, with the monitor attached to the latter). I have not found a way to disable the run time limit on the GTX 285.

To iAPX: The question is not about how to select the device for computation, but how to disable the kernel execution time limit.
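The only workaround I know of is to restructure the work so that no single launch runs long enough to trip the watchdog. A rough sketch of the idea (the kernel is a hypothetical stand-in for the real work; cudaThreadSynchronize is the CUDA 3.0-era sync call):

#include <cuda_runtime.h>

/* Hypothetical kernel standing in for the real long-running work. */
__global__ void process_chunk(float *data, int offset, int chunk)
{
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + chunk)
        data[i] *= 2.0f;  /* placeholder computation */
}

void run_in_chunks(float *d_data, int n)
{
    const int chunk   = 1 << 20;  /* size each chunk to finish well under the limit */
    const int threads = 256;

    for (int offset = 0; offset < n; offset += chunk) {
        int this_chunk = (n - offset < chunk) ? (n - offset) : chunk;
        int blocks     = (this_chunk + threads - 1) / threads;
        process_chunk<<<blocks, threads>>>(d_data, offset, this_chunk);
        cudaThreadSynchronize();  /* let each launch finish before starting the next */
    }
}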

Cheers,

Mark

I have the exact same problem.

Has anyone found a solution for this?

Thanks

I know this thread is ancient, but the question is very pertinent to my work.

Has anyone found a way to disable the kernel execution time limit on a Mac (OS X 10.6.8, Quadro FX 4800)?
I've been trying for two days, to no avail.

Thanks