Preventing GPU calls in GPU idle time.

Hi everyone,

I’d like to ask for your help with the following. If multiple users log in to a (multi-core) system to run GPU applications, I would expect that these applications can interfere with each other. Examples you could think of are a kernel of program A being launched in between two kernel calls of program B, or worse, both programs allocating device memory at the same time.

One way of preventing this would be to set and unset an environment variable, so that each application can check whether it is currently allowed to execute on the GPU. A large disadvantage of this method is that it is obviously useless if an application does not build in that check.
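For concreteness, a minimal sketch of that idea might look like the code below. The variable name GPU_IN_USE is purely illustrative; any name all users agree on would do, and the check only helps if every application performs it.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Cooperative check: only proceed if no other user has claimed the GPU.
       GPU_IN_USE is a hypothetical, agreed-upon variable name. */
    const char *busy = getenv("GPU_IN_USE");
    if (busy != NULL && strcmp(busy, "1") == 0) {
        fprintf(stderr, "GPU is claimed by another user, exiting.\n");
        return 1;
    }

    /* ... allocate device memory and launch kernels here ... */
    return 0;
}
```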

Are there any more robust ways to prevent possible problems?

thanks,

Martijn

There is a potential problem with concurrent execution of CUDA applications, but they will not crash unless more memory is allocated on the device than is available. Performance does suffer, however, as kernel calls are serialized and an application may have to wait for the GPU to become available.
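To make the out-of-memory case concrete, here is a sketch of guarding an allocation by first querying how much device memory is free. It assumes a toolkit version that exposes cudaMemGetInfo in the runtime API; the 256 MB request size is just an example.

```c
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t free_bytes = 0, total_bytes = 0;
    size_t wanted = 256 * 1024 * 1024;  /* example request: 256 MB */

    /* Query free device memory; allocations by other users reduce this. */
    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed\n");
        return 1;
    }

    if (free_bytes < wanted) {
        fprintf(stderr, "Not enough free device memory (%zu of %zu bytes free)\n",
                free_bytes, total_bytes);
        return 1;
    }

    void *d_buf = NULL;
    if (cudaMalloc(&d_buf, wanted) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed even though memory looked free\n");
        return 1;
    }

    /* ... use d_buf ... */
    cudaFree(d_buf);
    return 0;
}
```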

You could make sure every user on the machine checks an environment variable before running, and busy-loops until the GPU becomes available. But the way the framework works today, just make sure that memory is allocated with CUDA_SAFE_CALL, and the runtime will make sure your applications run correctly.
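In that spirit, here is a hedged sketch combining both suggestions: an error-checking wrapper around CUDA calls (CUDA_SAFE_CALL from the SDK's cutil header does essentially this) plus a busy-loop retry when the allocation fails because another application is holding the memory. The CHECK_CUDA macro, the retry loop, and the 64 MB size are illustrations, not the SDK's actual code.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <cuda_runtime.h>

/* Illustrative stand-in for CUDA_SAFE_CALL: abort on any CUDA error. */
#define CHECK_CUDA(call)                                                  \
    do {                                                                  \
        cudaError_t err = (call);                                         \
        if (err != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error at %s:%d: %s\n",                  \
                    __FILE__, __LINE__, cudaGetErrorString(err));         \
            exit(1);                                                      \
        }                                                                 \
    } while (0)

int main(void)
{
    void  *d_buf  = NULL;
    size_t wanted = 64 * 1024 * 1024;   /* example request: 64 MB */

    /* Busy-loop variant: keep retrying while another user holds the memory.
       sleep() is POSIX; substitute your platform's delay function as needed. */
    while (cudaMalloc(&d_buf, wanted) == cudaErrorMemoryAllocation) {
        fprintf(stderr, "Device memory busy, retrying in 1 s...\n");
        sleep(1);
    }

    /* Subsequent calls can use the hard-failing check. */
    CHECK_CUDA(cudaMemset(d_buf, 0, wanted));

    /* ... run kernels ... */
    CHECK_CUDA(cudaFree(d_buf));
    return 0;
}
```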

See the recent thread http://forums.nvidia.com/index.php?showtopic=90728 for one solution.