simpleMultiGPU from the SDK

I have two questions about the simpleMultiGPU example from the SDK.

  1. Does the following code launch the GPUs in parallel or in series?

  2. If I have, say 2 GPUs and 2 CPUs on the same machine, does the code automatically use both CPUs (one for each GPU) or just use a single CPU to handle both GPUs?

Thanks in advance for any suggestions.

Runs them in parallel - that's why it's called the Multi GPU sample :)

cutStartThread opens a HOST (CPU) thread, so the loop creates 2 HOST threads. Usually each will run on a different CPU core (i.e. 2 cores out of 4 if you have a single-socket quad-core machine).
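Roughly, the relevant part of the SDK sample looks like this (paraphrased from memory, not the exact code; solverThread and plan are the sample's per-GPU worker function and its per-device argument struct):

    // One host thread per GPU; cutStartThread is just a thin wrapper
    // around CreateThread / pthread_create from cutil's multithreading.h.
    for (int i = 0; i < GPU_N; i++)
        threadID[i] = cutStartThread((CUT_THREADROUTINE)solverThread, (void *)(plan + i));

    // Block until every host thread (and therefore every GPU) has finished.
    cutWaitForThreads(threadID, GPU_N);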

eyal

Thank you very much for the help. Do you know of any documentation that discusses "cutStartThread", etc.? I could not find it in the CUDA programming manual and would like a systematic introduction to how NVIDIA's multi-GPU model works.

It is very simple - there is no multi-GPU model. If you want multi-GPU, you will have to write your own implementation. That cutStartThread function you are asking about is just some basic host threading someone hacked together for the purposes of demo programs in the SDK, which is why there is no documentation for it. You most definitely do not want to use it (or any other part of the cutil library, for that matter) in production code.

Try the GPUWorker library from MisterAnderson (google this newsgroup), or just put together something of your own using the "regular" Windows/Linux threads API such as CreateThread (Windows) or pthread (Linux).
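For example, a bare-bones pthread version might look something like this (just a sketch, not GPUWorker's API; the kernel, buffer size, and names are placeholders):

    /* Sketch only: one pthread per GPU, each host thread binds to its own
       device with cudaSetDevice() and launches an independent kernel. */
    #include <pthread.h>
    #include <stdio.h>
    #include <cuda_runtime.h>

    #define MAX_GPUS 16

    __global__ void scaleKernel(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    static void *gpuWorker(void *arg)
    {
        int dev = *(int *)arg;
        const int n = 1 << 20;
        float *d_data = NULL;

        cudaSetDevice(dev);                  /* bind this host thread to one GPU */
        cudaMalloc((void **)&d_data, n * sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));

        scaleKernel<<<(n + 255) / 256, 256>>>(d_data, n);
        cudaDeviceSynchronize();             /* wait for this GPU only */

        cudaFree(d_data);
        printf("device %d done\n", dev);
        return NULL;
    }

    int main(void)
    {
        int deviceCount = 0;
        cudaGetDeviceCount(&deviceCount);
        if (deviceCount > MAX_GPUS) deviceCount = MAX_GPUS;

        pthread_t threads[MAX_GPUS];
        int ids[MAX_GPUS];

        /* One host thread per GPU; the threads run concurrently, so the GPUs do too. */
        for (int i = 0; i < deviceCount; i++) {
            ids[i] = i;
            pthread_create(&threads[i], NULL, gpuWorker, &ids[i]);
        }
        for (int i = 0; i < deviceCount; i++)
            pthread_join(threads[i], NULL);

        return 0;
    }

Compile with nvcc on Linux (something like nvcc multigpu.cu -o multigpu -lpthread); on Windows you would swap the pthread calls for CreateThread/WaitForMultipleObjects.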

eyal

Thanks for the suggestions!