Why is using MPI for multi-GPU communication inside a function much slower than using MPI for multi-GPU communication directly in the main function? The code for the MPI multi-GPU communication follows.