Host Thread Multiple Devices

Hi all.

On page 34 of the programming guide there is
3.2.3 Multiple Devices

It talks of a "Host Thread". Can anybody elaborate on what a host thread is?


Let us say you write a 'hello world' program in C. When you launch it, it is executed as a thread (a light-weight process) on the CPU. This thread is called the host thread.
In the CUDA context, the host thread is the one that calls the __global__ (kernel) function. :)
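To make that concrete, here is a minimal sketch (the kernel name is made up for illustration): `main()` runs as the host thread on the CPU, and its launch of the `__global__` function is what the guide means by the host thread calling the device.

```cuda
#include <cstdio>

// Runs on the device (GPU); launched from the host thread.
__global__ void helloKernel()
{
    printf("hello from the device\n");
}

int main()
{
    // main() executes as the host thread on the CPU.
    // The <<<...>>> launch below is the host thread
    // calling the __global__ function.
    helloKernel<<<1, 1>>>();
    cudaDeviceSynchronize();  // host thread waits for the kernel to finish
    return 0;
}
```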

So in the CUDA context, does it mean that if I had to use multiple devices I would need multiple Pthreads, and each of these threads would call its own global function?

That’s true… Please see the attached text file for more information.
It gives a scenario of what a typical multi-GPU use case would look like.
multiple_gpu_case.txt (469 Bytes)
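A rough sketch of that one-host-thread-per-GPU pattern might look like the following (kernel name and sizes are made up; in the CUDA versions this thread describes, each host thread gets its own context on its device via cudaSetDevice):

```cuda
#include <pthread.h>
#include <cstdio>

// Illustrative kernel: doubles each element.
__global__ void myKernel(float *data)
{
    data[threadIdx.x] *= 2.0f;
}

// One worker per host thread; the argument is the device ordinal.
void *deviceWorker(void *arg)
{
    int dev = *(int *)arg;
    cudaSetDevice(dev);                  // bind this host thread to one GPU
    float *d_data;
    cudaMalloc(&d_data, 256 * sizeof(float));
    myKernel<<<1, 256>>>(d_data);        // each host thread calls its own kernel
    cudaDeviceSynchronize();
    cudaFree(d_data);
    return NULL;
}

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    if (count > 16) count = 16;          // cap for the fixed-size arrays below

    pthread_t threads[16];
    int ids[16];
    for (int i = 0; i < count; ++i) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, deviceWorker, &ids[i]);
    }
    for (int i = 0; i < count; ++i)
        pthread_join(threads[i], NULL);
    return 0;
}
```

Compile with `nvcc -lpthread`; each Pthread works against a different device, which matches the scenario in the attachment.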

Thanks, that was very useful.

Please tell me what happens with the kernels on different GPUs (devices): do they share their memories, or do we need to do some message passing?

Even though I haven’t worked in multi-GPU environments, I believe what you are saying is true. I don’t think they will share memory. Someone with multi-GPU programming experience is the best person to answer this…

“Do they share their memories?” - The answer is NO. Each GPU has its own memory, not shared with any other GPU. If you want to exchange data, you have to do it via a GPU0-to-CPU-to-GPU1 copy.
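A sketch of that GPU0-to-CPU-to-GPU1 exchange, assuming a CUDA version where a single host thread may switch devices with cudaSetDevice (in the one-thread-per-GPU model discussed above, each half would instead live in its own host thread):

```cuda
#include <cstdlib>
#include <cstdio>

int main()
{
    const size_t bytes = 1024 * sizeof(float);
    float *h_buf = (float *)malloc(bytes);   // staging buffer in CPU memory

    float *d_src, *d_dst;

    cudaSetDevice(0);
    cudaMalloc(&d_src, bytes);
    // ... some kernel on GPU0 fills d_src ...
    cudaMemcpy(h_buf, d_src, bytes, cudaMemcpyDeviceToHost);  // GPU0 -> CPU

    cudaSetDevice(1);
    cudaMalloc(&d_dst, bytes);
    cudaMemcpy(d_dst, h_buf, bytes, cudaMemcpyHostToDevice);  // CPU -> GPU1
    // ... kernels on GPU1 can now read d_dst ...

    free(h_buf);
    return 0;
}
```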