I have two machines, and each machine has an NVIDIA CUDA-enabled GPU. The two machines are connected by a 100 Mbps cable. I want to know whether CUDA provides any facility to run the same kernel on both machines (on their GPUs) simultaneously.
tmurray
2
uh, no, there are no CUDA-specific features to do that, but you can certainly use MPI and CUDA from the same app.
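To illustrate the MPI-plus-CUDA approach mentioned above, here is a minimal sketch: each MPI rank (one per machine) launches the same kernel on its local GPU, then the partial results are combined over the network with `MPI_Reduce`. This assumes one GPU per node, a working MPI installation, and a launch like `mpirun -np 2 -host node1,node2 ./app`; the kernel and data are illustrative, not from the original thread.

```cuda
// Hedged sketch: same kernel runs on each machine's GPU, one MPI rank per node.
// Assumes CUDA toolkit + an MPI implementation (e.g. Open MPI); compile with
// something like: nvcc -ccbin mpicxx app.cu -o app
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Example kernel: scale every element of an array by a factor.
__global__ void scale(float *d, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float *h = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    // Each rank runs the identical kernel on its own local GPU.
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);

    // Combine partial results across machines over the 100 Mbps link.
    float local = h[0], global = 0.0f;
    MPI_Reduce(&local, &global, 1, MPI_FLOAT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of first elements = %f\n", global);

    free(h);
    MPI_Finalize();
    return 0;
}
```

Note that over a 100 Mbps link the `MPI_Reduce`/`MPI_Send` traffic is far slower than GPU memory bandwidth, so this pays off mainly when each node's kernel does a lot of work per byte exchanged.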
Related topics
| Topic | Replies | Views | Activity |
|---|---|---|---|
| Can two kernels form two distinct applications run on two GPU cards simutaneously? | 1 | 763 | April 7, 2013 |
| How to make use of multiple graphic cards ? Is it possible ? | 1 | 2847 | June 12, 2008 |
| CUDA using Multiple devices | 5 | 3419 | June 22, 2009 |
| Two different cards in parallel with Cuda | 1 | 1588 | November 2, 2008 |
| the possibility of two CUDA program run in a GPU | 1 | 939 | November 29, 2011 |
| Using more than 1 CUDA card at a time. Physics simulations flat out flying on GPU | 12 | 12709 | March 12, 2010 |
| the possibility of two CUDA program run in a GPU | 0 | 3534 | November 29, 2011 |
| Kernels launch - parallel or serial? | 16 | 7155 | January 11, 2010 |
| Example on MPI + CUDA on Two CPU and Two GPU node | 1 | 4033 | September 7, 2011 |
| how to run two __global__ funtions simultaneously in two CUDA devices | 0 | 1449 | March 28, 2012 |