cuda + openmpi immediate segmentation fault

I am running Ubuntu 8.04 with the CUDA 2.0 toolkit, driver version 177.73, and OpenMPI. With this configuration everything works fine, and I am able to compile and execute MPI code by simply replacing g++/gcc with mpic++ in common.mk.

My issue is that when I try to upgrade my driver to version 180.22 (to get support for my new 295 cards), I get an immediate segmentation fault from even the most trivial programs (an empty int main, or the hello-world sketch below). The problem happens only when I compile with the CUDA template; other programs compiled with just the mpic++ command line run fine, and when I go back to driver v177.73 everything works again. This occurs with a nearly identical software configuration on 5 different workstations with different motherboards/CPUs, chipsets, and graphics cards.
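To be concrete, even a trivial MPI hello world along these lines segfaults at launch when built through the CUDA template (the file name and launch line are just illustrative; an empty int main() is enough to trigger it):

// mpi_hello.cu -- hypothetical file name; any trivial program shows the crash
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);            // initialize the MPI runtime

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    std::printf("rank %d says hello\n", rank);

    MPI_Finalize();                    // shut down the MPI runtime
    return 0;
}

This is built by pointing the SDK template's compiler variables at mpic++ in common.mk and launched with something like mpirun -np 2 ./mpi_hello.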

Has anyone had this issue in the past? I suspect there may be a compiler flag I can pass to fix it, but that is well above my pay grade. I have found that things compile and run if I switch to MPICH and the mpicc wrapper.

From the release notes:

o Some MPI implementations add the current working directory to the $PATH
silently, which can trigger a segmentation fault in the CUDA driver if you
do not normally already have “.” in your $PATH. The executable must be in
your path to avoid this error. The best solution is to specify the
executable to run using an absolute path or a relative path that at minimum
includes ./ in front of it.

Examples: mpirun -np 2 $PWD/a.out
mpirun -np 2 ./a.out

mfatica, thanks for the prompt response; your encyclopedic knowledge of the CUDA documentation is impressive.

Unfortunately, these suggestions don’t seem to affect the problem. On the other hand, I have stopped caring, since driver v180.51 does not exhibit the issue, so my problem is resolved.

Thanks again… you have given me one more reason to stick with the green team.