Segmentation fault / memory error on CUDA program exit

I am trying to build and run a CUDA-enabled version of GROMACS on our SLES 10 64-bit Linux system. The CUDA part of the program (mdrun) seems to execute successfully; however, when the main function tries to exit with a "return 0", the program crashes with either a segmentation fault (on one build) or a glibc memory-corruption abort (on a different build). The following is the error reported with the abort:

*** glibc detected *** /opt/cus/gromacs/cuda/gromacs-4.0.5/bin/mdrun_cuda_d: corrupted double-linked list: 0x00000000008a7520 ***

When I run this under a debugger, the stack trace shows the program exiting while trying to free memory inside the cudart library, with both CUDA 2.2 and CUDA 2.3. I posted a similar question on the OpenMM/GROMACS forums:

I am able to run other CUDA code on our system without problems. At this point I am not sure whether this is a GROMACS/OpenMM-specific problem or whether the error is related to the SLES 10 build of cudart.
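For what it's worth, a minimal standalone check along these lines runs cleanly here (this is a hypothetical sketch I wrote to isolate the problem, not code from the GROMACS build). It forces the CUDA context teardown explicitly before main() returns, since the crash seems to happen in cudart's implicit exit-time cleanup:

```cuda
// exit_test.cu -- hypothetical check of the cudart teardown path.
// Build with: nvcc exit_test.cu -o exit_test
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    float *d_buf = 0;

    // Allocate and free some device memory, roughly as mdrun would.
    if (cudaMalloc((void **)&d_buf, 1024 * sizeof(float)) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }
    cudaFree(d_buf);

    // Explicitly tear down the CUDA context before returning.
    // In CUDA 2.x this is cudaThreadExit(); if the crash only appears
    // when cudart cleans up implicitly at process exit, forcing the
    // teardown here should help localize it.
    cudaThreadExit();

    printf("clean exit\n");
    return 0;
}
```

Since this toy case exits cleanly, my guess is the corruption happens earlier in mdrun and only surfaces when cudart frees its structures at exit.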

Has anyone seen a similar problem? Any suggestions on how I should proceed in debugging it?

Thank you,


Dr. Dirk Joel Luchini Colbry
Research Specialist
Institute for Cyber-Enabled Research (iCER)
Michigan State University