Should cudaMallocHost() need retry?

Hi,

I’m using CUDA 7.5 on Ubuntu 14.04, and I’ve found that cudaMallocHost() sometimes fails (with error code 30, “unknown error”) even though the system still has plenty of memory (way beyond enough, in fact: only a few hundred MB allocated out of 100 GB available).

Since the problem does not happen every time, I worked around it by wrapping cudaMallocHost() in a while loop that retries after a failure. With those retries, the problem went away.
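
For concreteness, here is a minimal sketch of that workaround (the retry cap and the 100 ms sleep are just illustrative values, not necessarily what I use):

```c
#include <cuda_runtime.h>
#include <stdio.h>
#include <unistd.h>

/* Retry wrapper around cudaMallocHost(). MAX_RETRIES and the 100 ms
 * back-off are illustrative values only. */
static cudaError_t mallocHostWithRetry(void **ptr, size_t bytes)
{
    const int MAX_RETRIES = 10;
    cudaError_t err = cudaErrorUnknown;
    for (int i = 0; i < MAX_RETRIES; ++i) {
        err = cudaMallocHost(ptr, bytes);
        if (err == cudaSuccess)
            return err;
        fprintf(stderr, "cudaMallocHost failed (%s), retrying...\n",
                cudaGetErrorString(err));
        usleep(100 * 1000);   /* wait 100 ms before the next attempt */
    }
    return err;               /* still failing after all retries */
}
```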

However, I’m still a little bit worried about that. Should this be happening at all? Is it appropriate to retry cudaMallocHost()?

Thanks,
Cui

I don’t think it should need a retry. I have done extended testing of cudaHostAlloc on Red Hat-family systems (Fedora/CentOS/RHEL) and haven’t witnessed that behavior.

I always use cudaHostAlloc instead of cudaMallocHost, although I have no reason to think that should matter for the described behavior.
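
For reference, a cudaHostAlloc call with the default flags behaves the same as cudaMallocHost (the 64 MB size below is just an example):

```c
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    /* cudaHostAlloc with cudaHostAllocDefault is documented to emulate
     * cudaMallocHost(); the 64 MB allocation size is only an example. */
    void *p = NULL;
    cudaError_t err = cudaHostAlloc(&p, (size_t)64 << 20, cudaHostAllocDefault);
    printf("cudaHostAlloc: %s\n", cudaGetErrorString(err));
    if (err == cudaSuccess)
        cudaFreeHost(p);
    return 0;
}
```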

Maybe that’s another problem specific to Ubuntu. (See also my other thread: https://devtalk.nvidia.com/default/topic/883675/cuda-programming-and-performance/pinned-memory-limit/)

I really wish I could switch to CentOS, but it’s hard to get my dependencies working there; many of them are not available in the yum repositories.

Thanks,
Cui

As far as I know, it should not need a retry unless you are actually close to running out of memory and some memory gets freed up between the retries.

It seems unlikely with the extreme values you mentioned, but could this be a memory fragmentation problem?

I haven’t looked into this for a long while, though, so I am not even sure whether cudaMallocHost() / cudaHostAlloc() still requires contiguous memory (it seems to me the GPU’s MMU should be able to handle fragmented host allocations, but as this is all undocumented I am not quite sure).

Have you checked your max locked memory limit setting with ulimit -a?
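
If you want to check it from inside the program as well, here is a minimal sketch using getrlimit, which reports the same “max locked memory” limit that ulimit shows:

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* RLIMIT_MEMLOCK is the same "max locked memory" limit that
     * `ulimit -a` (or `ulimit -l`, in KB) reports. */
    struct rlimit rl;
    if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("max locked memory: unlimited\n");
    else
        printf("max locked memory: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    return 0;
}
```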