Job Scheduler/Queue Class for multiple GPUs has anyone implemented one?

Hi guys,

A couple of months ago I stumbled upon a class or library someone posted in this forum that takes care of job scheduling, or resembles some sort of job queuing system, for multiple devices. I didn't read it very carefully. I know I should have bookmarked the link, but I didn't, and now I can't find it anymore. I've tried all the buzzwords in the search field without any success.

Can anyone point me to the right thread, or has anyone implemented such a class themselves?

thanks

Probably GPUWorker?

GPUWorker:
http://forums.nvidia.com/index.php?showtopic=66598&pid=373959&mode=threaded&show=&st=&
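If I remember right, GPUWorker is a C++ class that keeps one worker thread per GPU and feeds it queued function calls. Just to illustrate that general pattern (not the actual GPUWorker code), here's a rough Python sketch; the bind_device argument stands in for whatever binds the thread to its device (e.g. a cudaSetDevice-style call) and is only a placeholder:

```python
import threading
import queue

class DeviceWorker(threading.Thread):
    """One worker thread per GPU; jobs are plain callables pulled from a queue.
    bind_device is a placeholder for a cudaSetDevice-style call, not a real API."""

    def __init__(self, device_id, bind_device=lambda dev: None):
        super().__init__(daemon=True)
        self.device_id = device_id
        self.bind_device = bind_device
        self.jobs = queue.Queue()
        self.start()

    def run(self):
        self.bind_device(self.device_id)   # every job runs in this GPU's context
        while True:
            job, done = self.jobs.get()
            if job is None:                # sentinel: shut the worker down
                break
            try:
                job()
            finally:
                done.set()

    def call(self, job):
        """Submit a callable and block until the worker has executed it."""
        done = threading.Event()
        self.jobs.put((job, done))
        done.wait()

    def stop(self):
        self.jobs.put((None, threading.Event()))

# Usage: one worker per device, jobs dispatched to whichever GPU you choose.
workers = [DeviceWorker(dev) for dev in range(2)]
workers[0].call(lambda: print("runs in the context bound to GPU 0"))
workers[1].call(lambda: print("runs in the context bound to GPU 1"))
for w in workers:
    w.stop()
```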

CUPP is somewhat related:
http://cupp.gpuified.de/

Or, if you are looking for an application-level job scheduler: I use Sun Grid Engine configured with the number of GPUs as a consumable resource. The snag is that there is no way to find out which GPU each job has been assigned. For that I've written a Python script for requesting/reserving GPUs that I'll publish to the forum once I have time to polish it up; a rough sketch of the idea is below.
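Until that script is posted, here's a minimal sketch of one way to do it, reserving a free GPU through per-device lock files in a shared directory. The lock directory and device count are assumptions for the example, not values from the actual script:

```python
import fcntl
import os

LOCK_DIR = "/tmp/gpu_locks"   # assumed shared location on the node
NUM_GPUS = 4                  # assumed number of devices per node

def reserve_gpu():
    """Grab an exclusive lock file for the first free GPU and return its index.
    The lock is released automatically when the process exits."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    for dev in range(NUM_GPUS):
        fd = os.open(os.path.join(LOCK_DIR, "gpu%d.lock" % dev),
                     os.O_CREAT | os.O_RDWR)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # fails if already reserved
            return dev, fd                                  # keep fd open to hold the lock
        except OSError:
            os.close(fd)
    raise RuntimeError("no free GPU available")

if __name__ == "__main__":
    dev, fd = reserve_gpu()
    print("reserved GPU %d" % dev)  # e.g. set CUDA_VISIBLE_DEVICES accordingly
```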

Thanks for the tip!

Thanks guys! GPUWorker is exactly what I had in mind.