Thrust crash when running out of GPU memory?

I’ve heard that Thrust allocates some GPU memory internally when you run a Thrust algorithm, but what is supposed to happen when the GPU is out of memory? From what I can find online it should report the failure somehow (an exception or error code), but in my case the program just crashes outright.

I’ve filled up the GPU with valid data (verified by checking the return value of cudaMemcpy and the CUDA error state), and then I call thrust::unique_by_key_copy. Inside this call, on Windows, the program crashes with the error shown below. Judging by the cuda-memcheck output, Thrust is calling cudaMalloc internally and the failure isn’t being handled anywhere I can see.
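
For reference, this is roughly the pattern I expected to work, assuming Thrust reports a failed internal allocation as a C++ exception. The vector names and the wrapper function are just placeholders:

#include <thrust/device_vector.h>
#include <thrust/unique.h>
#include <thrust/system_error.h>
#include <iostream>
#include <new>

// Hypothetical names: keys/vals are already filled on the device.
void run_unique(thrust::device_vector<int>&   keys,
                thrust::device_vector<float>& vals,
                thrust::device_vector<int>&   out_keys,
                thrust::device_vector<float>& out_vals)
{
    try
    {
        // unique_by_key_copy allocates temporary storage internally;
        // I expected a failed allocation to show up here as an exception.
        thrust::unique_by_key_copy(keys.begin(), keys.end(),
                                   vals.begin(),
                                   out_keys.begin(),
                                   out_vals.begin());
    }
    catch (thrust::system_error& e)
    {
        std::cerr << "Thrust error: " << e.what() << std::endl;
    }
    catch (std::bad_alloc& e)
    {
        std::cerr << "Out of memory: " << e.what() << std::endl;
    }
}

Instead of landing in either catch block, the process just aborts.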

Is there perhaps a way to pre-allocate the memory Thrust will need? That might also improve performance, since I’ll be calling this function many times. (The sort of thing I have in mind is sketched at the end of this post.)

This application has requested the Runtime to terminate it in an unusual way.
Please contact the application’s support team for more information.

cuda-memcheck says:

========= Program hit error 2 on CUDA API call to cudaMalloc
========= Saved host backtrace up to driver entry point at error
========= Host Frame:C:\Windows\system32\nvcuda.dll (cuD3D11CtxCreate + 0x107709) [0x1294c9]
========= Host Frame:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\bin\cudart32_50_27.dll (cudaMalloc + 0x276) [0x242d6]

========= ERROR SUMMARY: 1 error
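
For completeness, this is the kind of pre-allocation I had in mind, loosely based on the custom_temporary_allocation example that ships with Thrust. I’m not sure which Thrust version first supports passing an allocator to the execution policy (my CUDA 5.0 install ships an older Thrust), so treat this as a sketch rather than something I’ve gotten to compile; all the names here are mine:

#include <thrust/device_vector.h>
#include <thrust/unique.h>
#include <thrust/system/cuda/execution_policy.h>
#include <cstddef>
#include <new>

// Sketch only: hands out slices of one big device buffer that was
// cudaMalloc'd once up front, instead of letting Thrust call cudaMalloc
// for temporary storage inside every algorithm call.
struct preallocated_pool
{
    typedef char value_type;

    char*  base;      // device buffer allocated once with cudaMalloc
    size_t capacity;  // size of that buffer in bytes
    size_t offset;    // bytes handed out so far (never reused in this sketch)

    char* allocate(std::ptrdiff_t num_bytes)
    {
        // round up so later sub-allocations stay 256-byte aligned
        size_t bytes = (static_cast<size_t>(num_bytes) + 255) & ~size_t(255);
        if (offset + bytes > capacity)
            throw std::bad_alloc();
        char* p = base + offset;
        offset += bytes;
        return p;
    }

    void deallocate(char*, size_t)
    {
        // nothing to do; the whole buffer is freed once at shutdown
    }
};

// Intended usage (per my understanding of the execution-policy overloads):
//   preallocated_pool pool = { device_buffer, buffer_bytes, 0 };
//   thrust::unique_by_key_copy(thrust::cuda::par(pool),
//                              keys.begin(), keys.end(), vals.begin(),
//                              out_keys.begin(), out_vals.begin());

Is something along these lines the right way to do it, or is there a better mechanism?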