Multiple GPUs and CPU

Hello Forum,

I have an application that launches a kernel on GPU1. Meanwhile, the CPU can do other work while it waits for the kernel to finish.

When I add multiple GPUs to the same model, the CPU freezes up until all the kernels complete (cudaThreadSynchronize() is called in each host thread that manages a GPU). I don’t know why the host is not able to continue while the GPUs are off doing work. Quite literally the whole system freezes up (even the mouse pointer is locked).

It seems to be related to using multiple CUDA GPUs and the host CPU at the same time.
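For reference, the per-GPU structure is roughly the following sketch (pseudocode-level; myKernel, the data, and the sizes are made-up placeholders, not names from my actual code, and it assumes the CUDA 3.x-era one-thread-per-GPU model):

```cpp
// Sketch: one host thread per GPU, each bound with cudaSetDevice().
#include <cuda_runtime.h>
#include <pthread.h>

__global__ void myKernel(float *data, int n)   // placeholder kernel
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

static void *gpuWorker(void *arg)
{
    int device = *(int *)arg;
    cudaSetDevice(device);                     // bind this host thread to one GPU
    float *d_data;
    const int n = 1 << 20;
    cudaMalloc(&d_data, n * sizeof(float));
    myKernel<<<(n + 255) / 256, 256>>>(d_data, n);  // launch is asynchronous
    cudaThreadSynchronize();                   // should block THIS thread only
    cudaFree(d_data);
    return NULL;
}

int main(void)
{
    int ids[2] = {0, 1};
    pthread_t t[2];
    for (int i = 0; i < 2; ++i) pthread_create(&t[i], NULL, gpuWorker, &ids[i]);
    // The CPU should be free to do other work here while both GPUs run...
    for (int i = 0; i < 2; ++i) pthread_join(t[i], NULL);
    return 0;
}
```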

Any clues?

Thank you.

Your CPU likely IS free. What’s happening is you’re running a CUDA kernel on your display GPU, so the display locks up while it’s computing.
The CPU is still active, and your machine isn’t frozen; it’s just your display.

This is quite annoying, of course. The best solution for professional code is to go multi-GPU and run CUDA only on dedicated GPUs, not on your display card. Many of us install a cheapo “display only” GPU for this reason. It’s also useful for debugging on Windows.

The other solution is to structure your kernels so that each launch does only a few milliseconds of work (at most, say, 50 ms), allowing your display to at least update. If you’re writing consumer apps, you pretty much have to do this, since your target market mostly has only one GPU in the machine.
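That chunking pattern looks roughly like this sketch (pseudocode-level; workKernel, totalItems, and chunkItems are made-up names, and the chunk size would need tuning so each launch stays under the ~50 ms budget):

```cpp
// Sketch: split one long-running job into many short kernel launches so
// the display GPU can refresh between them.
#include <cuda_runtime.h>

__global__ void workKernel(float *data, int offset, int count)
{
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + count) data[i] = data[i] * data[i] + 1.0f;
}

void runChunked(float *d_data, int totalItems)
{
    const int chunkItems = 1 << 18;   // tune so each launch takes only a few ms
    for (int offset = 0; offset < totalItems; offset += chunkItems) {
        int count = (totalItems - offset < chunkItems) ? totalItems - offset
                                                       : chunkItems;
        workKernel<<<(count + 255) / 256, 256>>>(d_data, offset, count);
        cudaThreadSynchronize();      // gap lets the driver service the display
    }
}
```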

@Worley… I thought the same thing when he said the computer “literally” locks up (mouse pointer and all)…

@NeedWisdom… Here’s some wisdom… get an 8400 GS or similar for $30. Stick it in your x1 slot if you’re out of x16s. It will work great for display…

Cheers!
Debdatta Basu.

Thank you all very much for the responses but I don’t think that’s the problem for the following reasons:

  • I have a Tesla GPU plus a dual Quadro NVS 420, which serves as my cheapo display GPU. When I target any of these individually, things work fine. It’s when I try to orchestrate all of them simultaneously that the machine locks up. So I would expect the display GPUs to seize up the machine whether or not I use the Tesla (the dedicated card).
  • If it were just the display seizing up while the CPU kept humming along, I would see some indication that progress was made during the display’s lock-up. Instead, the CPU accomplished nothing during the freeze, so I don’t think that’s it.

The point is taken that I should use dedicated GPUs for compute and leave my display units alone. I don’t think that can hurt.

Any other ideas would be appreciated. Thanks again.

My bad.

Yep–I think you folks are correct. I did some experiments and it looks like my display GPU is the culprit.

Thanks again.

Would the following setup work for you?

nvidia-xconfig \
  --allow-glx-with-composite \
  -composite \
  --dynamic-twinview \
  -a \
  -logo \
  --multigpu=Auto \
  --render-accel \
  --no-power-connector-check