Running two instances of MATLAB calling mex DLLs that use different GPUs on the same PC

OK, this problem could be the fault of MATLAB or Windows, but I thought I would ask here before I start bugging the MathWorks guys.

My colleague has a CUDA Monte Carlo mex DLL for MATLAB which takes the device number to use as a parameter (the DLL passes it to cudaSetDevice()).
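
For context, the device selection inside the DLL follows the usual mex pattern, roughly like this (a minimal sketch with hypothetical names, not her actual code):

```
// Minimal sketch of the mex entry point (names are hypothetical); the real
// DLL launches the Monte Carlo kernels after selecting the device.
#include "mex.h"
#include <cuda_runtime.h>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs < 1)
        mexErrMsgTxt("Expected the device number as the first argument.");

    // Device index passed in from the MATLAB side
    int dev = (int)mxGetScalar(prhs[0]);

    cudaError_t err = cudaSetDevice(dev);
    if (err != cudaSuccess)
        mexErrMsgIdAndTxt("MonteCarlo:cudaSetDevice", cudaGetErrorString(err));

    // ... allocate device memory, launch the Monte Carlo kernels, copy results back ...
}
```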

In the past she was running on a Windows 7 machine with a Titan which was also connected to the display. This particular configuration worked just fine, other than a little bit of OS lag when a kernel was running.

Now she has access to a Windows 7 machine with two GTX 980 GPUs, one connected to the display and one not. The WDDM timeout is not set.

The goal is to run two concurrent simulations via two separate MATLAB instances, each using a different GPU.

When both instances start, both GPUs are engaged, but after some time it seems that only one GPU runs at a time, with the other instance apparently stuck at the point where it reaches cudaSetDevice().

Even though the instances are not trying to access the same device, one instance appears to wait until the other instance's batch of work has finished before it can start up again, almost as if the work were being queued.

If we just run one instance on either GPU there are no issues.

One additional note: the MATLAB side of the implementation batches the simulations into smaller groups, so a batch takes at most about 20 seconds to finish before the next one starts.

I have just started trying to figure out this problem, which is complicated by potential interference from MATLAB or the OS.

Looking over the CUDA runtime API, I wonder if there is some property or flag I could use to either solve this issue or get more information about the nature of the problem.

Would using cudaSetDeviceFlags() with a setting such as cudaDeviceScheduleSpin or cudaDeviceScheduleBlockingSync help exert more control over the situation?
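
In other words, something along these lines (a sketch only; I have not verified that changing the scheduling flags affects the multi-instance behavior):

```
// Sketch: scheduling flags have to be set before a context is created on the
// device, so cudaSetDeviceFlags() would go right after cudaSetDevice() and
// before any allocations or kernel launches.
#include <cuda_runtime.h>

cudaError_t selectDevice(int dev)
{
    cudaError_t err = cudaSetDevice(dev);
    if (err != cudaSuccess) return err;

    // cudaDeviceScheduleBlockingSync makes the host thread block on a sync
    // primitive while waiting for the GPU instead of spinning on the CPU.
    return cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
}
```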

Any ideas on how I could debug or solve this problem?

If the application creates a context on both GPUs (intentionally or inadvertently), then the application's usage of the GPUs will serialize, even if one instance is using one GPU and the other instance is using the other GPU.

You could try setting the CUDA_VISIBLE_DEVICES environment variable in each host process before you launch MATLAB (or whatever), so that each process is restricted to a single GPU. This prevents it from creating a context on both GPUs. In this case you would pass device 0 to cudaSetDevice() in each instance, since the environment variable affects the enumeration of devices.
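
To make the enumeration point concrete: with CUDA_VISIBLE_DEVICES set to a single GPU index before the process starts, the runtime only sees that one device, and it always enumerates as device 0. A quick sanity check (a hypothetical standalone helper, not part of the mex DLL) could look like this:

```
// Standalone sanity check (hypothetical helper): run it from a console where
// CUDA_VISIBLE_DEVICES has been set to a single GPU index; it should report
// exactly one visible device, enumerated as device 0.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Visible CUDA devices: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  device %d: %s\n", i, prop.name);
    }
    return 0;
}
```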

Thanks for the answer.

In the past I have just manually set environment variables when needed from the desktop.

Since this is the first time I have had to do this for a DLL called by MATLAB, I am a bit unclear on how to pull this off:

Do I need two separately compiled versions of the DLL, with the only change being the code that sets the environment variable, and then start two MATLAB instances each calling a different DLL?

Or do I set it from the desktop via System Properties, with a different value, before starting each instance of MATLAB?

Currently that variable does not appear in the System Properties environment variable list.

The OS is Windows 7 64-bit.

First of all, I haven't actually done this with two MATLAB DLLs, so this is just speculation. I don't really know what is causing the behavior you are seeing.

I don't think it really matters whether you set the environment variable via the application or via the environment, but it has to be in place before the CUDA runtime gets initialized. If it were me, I would try setting it from the environment.
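
For completeness, if you did want to go the in-application route from your earlier question (two builds of the DLL, each hard-coding a different GPU), it would amount to something like this. It is untested on my end, and it only works if it executes before the very first CUDA runtime call in that MATLAB process:

```
// Sketch of the in-application alternative (untested with MATLAB): one build
// of the DLL would use "0" here, the other "1", and this must run before the
// first CUDA runtime call made in that MATLAB process.
#include <stdlib.h>
#include <cuda_runtime.h>

static cudaError_t restrictToGpu(void)
{
    // _putenv is the Windows CRT call; setenv() is the equivalent on Linux.
    _putenv("CUDA_VISIBLE_DEVICES=1");

    // Only one GPU is now visible to the runtime, and it enumerates as 0.
    return cudaSetDevice(0);
}
```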

It took some trial and error, but so far your suggestion seems to be working. I will be running both GTX 980s all night via two MATLAB instances and will see what awaits come morning.

Thanks.