CUDA using Multiple devices

I have two Quadro FX 5600 devices in my system. Can I use both of them together as a single GPU device to run my code? Will it enhance my code's performance?

You can’t couple or virtualize multiple GPUs into a single device for CUDA. If you want to use two GPUs simultaneously to process a single CUDA workload, you will have to manage it in your user space code using host threading or some interprocess communication mechanism like MPI with a separate CUDA context for each GPU.
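To make the MPI option concrete, here is a minimal sketch of one CUDA context per process: each MPI rank selects a GPU by its rank, so launching with `mpirun -np 2` gives one process per device. The kernel name `scale` and the workload itself are just placeholders for illustration.

```cuda
#include <mpi.h>
#include <cuda_runtime.h>

// Placeholder kernel standing in for the real workload.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Bind each MPI process to its own GPU; the context is created
    // on the selected device at the first CUDA runtime call.
    cudaSetDevice(rank);

    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    // ... each rank copies its share of the input to d_data here ...

    scale<<<(n + 255) / 256, 256>>>(d_data, n, 2.0f);
    cudaDeviceSynchronize();

    // ... gather the partial results with MPI_Gather or similar ...

    cudaFree(d_data);
    MPI_Finalize();
    return 0;
}
```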

Thanks a ton fr that reply…

But can you please explain MPI a little more? Is it about dividing the code into parts? To be clearer: suppose I have two kernels, then I could run each kernel on a different GPU… but what if I have just one kernel in my code? Can I assign half the number of blocks to each GPU if I have two GPUs?

If you have a single kernel, then the way to utilize two GPUs is to break the workload in half on the host, and have each GPU run its half concurrently with the other GPU. When they are done, combine the results on the host. How you implement that is completely problem specific. In any case, you must have one CPU thread or process per GPU context. The most popular way to do that is to use threading or MPI.
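As a sketch of the "half the blocks per GPU" idea: inside the thread or process that owns GPU `dev` (0 or 1), each context processes its own half of an N-element array, so each GPU launches roughly half the total blocks. `myKernel`, `d_in`, and `d_out` are hypothetical names; the device buffers are assumed to be allocated in that GPU's context.

```cuda
// Inside the thread/process that owns GPU `dev` (dev = 0 or 1):
int half    = N / 2;               // elements per GPU (assume N is even)
int offset  = dev * half;          // where this GPU's slice begins
int threads = 256;
int blocks  = (half + threads - 1) / threads;  // ~half the total blocks

// h_in + offset is this GPU's slice of the host input.
cudaMemcpy(d_in, h_in + offset, half * sizeof(float), cudaMemcpyHostToDevice);
myKernel<<<blocks, threads>>>(d_in, d_out, half);
cudaMemcpy(h_out + offset, d_out, half * sizeof(float), cudaMemcpyDeviceToHost);
```

Combining the results is then just a matter of each GPU writing back into its own `offset` region of the host array.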

“In any case, you must have one CPU thread or process per GPU context. The most popular way to do that is to use threading or MPI.”

Can you please tell me a little more about this "threading"? A small example would do much good… thanks in advance.

You can see a complete master-slave model multi-GPU implementation using Boost threads here:
http://forums.nvidia.com/lofiversion/index.php?t66598.html
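For a smaller, self-contained sketch than the Boost version linked above, here is the same idea with plain POSIX threads: each host thread calls `cudaSetDevice` to bind one GPU context, processes its half of the array, and copies its half back. The kernel `addOne` and the array size are placeholders chosen for illustration.

```cuda
#include <pthread.h>
#include <cuda_runtime.h>
#include <stdio.h>

#define N (1 << 20)          // total elements, split across 2 GPUs

__global__ void addOne(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] += 1.0f;
}

typedef struct {
    int    device;           // which GPU this thread drives
    float *host;             // this thread's slice of the host array
    int    count;            // number of elements in the slice
} WorkerArgs;

static void *worker(void *p)
{
    WorkerArgs *a = (WorkerArgs *)p;

    // The first runtime call after cudaSetDevice creates a context
    // on that device, so each thread owns one GPU context.
    cudaSetDevice(a->device);

    float *d;
    cudaMalloc(&d, a->count * sizeof(float));
    cudaMemcpy(d, a->host, a->count * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks  = (a->count + threads - 1) / threads;
    addOne<<<blocks, threads>>>(d, a->count);

    // This copy also synchronizes with the kernel on this context.
    cudaMemcpy(a->host, d, a->count * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
    return NULL;
}

int main(void)
{
    static float data[N];
    pthread_t  t[2];
    WorkerArgs args[2];

    // Split the array in half: one slice per GPU, run concurrently.
    for (int i = 0; i < 2; ++i) {
        args[i].device = i;
        args[i].host   = data + i * (N / 2);
        args[i].count  = N / 2;
        pthread_create(&t[i], NULL, worker, &args[i]);
    }
    for (int i = 0; i < 2; ++i)
        pthread_join(t[i], NULL);

    printf("done: data[0] = %f\n", data[0]);
    return 0;
}
```

The "combine the results on the host" step is trivial here because each thread writes back into a disjoint slice of `data`; a reduction-style problem would need an extra merge after the joins.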