Any solutions for maximizing GPU utilization on a multi-process system?

Thx

In our system (Linux), many processes use the GPUs (1-3 Teslas) for machine-learning classification, and we care about real-time performance. I want to design a mechanism that schedules the priority of each process's GPU requests (compute jobs, e.g. surrounding-environment detection, CV detection, segmentation, classification, etc.). But after some reading and coding, I realize this approach may have problems.
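Roughly, what I have in mind is something like this (a very simplified sketch in Python; the job names, priorities, and the `run_on_gpu` call are just placeholders, not our real pipeline):

```python
import queue
import threading

# Sketch of the idea: workers put GPU requests on a priority queue and a
# single dispatcher runs them one at a time, highest priority first.
gpu_requests = queue.PriorityQueue()

def run_on_gpu(job_name):
    # Placeholder: the real version would launch the kernel / model on the GPU.
    print(f"running {job_name} on GPU")

def dispatcher():
    while True:
        priority, job_name = gpu_requests.get()
        run_on_gpu(job_name)
        gpu_requests.task_done()

threading.Thread(target=dispatcher, daemon=True).start()

# Example requests: lower number = higher priority.
gpu_requests.put((0, "surrounding-env detect"))
gpu_requests.put((1, "cv detection"))
gpu_requests.put((2, "segmentation"))
gpu_requests.put((3, "classify"))

gpu_requests.join()  # wait until all queued jobs have run
```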

Any suggestions? ;(

There are plenty of open-source and commercial job schedulers that can do something like this:

TORQUE/Maui
TORQUE/Moab
LSF
Slurm
Mesos

to name just a few. A number of them are GPU-aware, so they have built-in capabilities to manage GPUs as an explicit resource.
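As a concrete illustration of what "GPUs as an explicit resource" looks like, here is a minimal sketch of submitting a job to Slurm and asking the scheduler for one GPU. It assumes a Slurm installation with GPU GRES configured; the partition name and the `classify.py` workload are placeholders:

```python
import subprocess

# Submit a job and request one GPU as a scheduled resource.
result = subprocess.run(
    [
        "sbatch",
        "--gres=gpu:1",                   # ask the scheduler for one GPU
        "--partition=gpu",                # placeholder partition name
        "--wrap", "python classify.py",   # placeholder workload
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())  # e.g. "Submitted batch job 1234"
```

The scheduler then decides when and on which GPU the job runs, which is essentially the prioritization mechanism you were considering building yourself.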

I used LSF from around the time it first came out (1993, created by a startup called Platform Computing out of Toronto) until about 2000, and was the administrator of a "farm" of about 100 machines for a while. Even at that time it was a very sophisticated tool with tons of configurability, and it came with excellent customer support. Nowadays the product appears to be owned by IBM. It is commercial software, so there will be a cost factor to consider. LSF does seem to offer GPU support:

https://developer.nvidia.com/platform-computing-platform-lsf

I am having trouble accessing the relevant IBM product pages this morning (they time out).