CUDA on Retina MacBook Pro (GeForce GT 650M)

I have run into some issues trying to get CUDA running reliably on this setup.

I keep getting an out-of-memory error when trying to run the examples after my computer has been running for a little while.

It appears that OS X grabs all available video memory for its own use. Is there any way to force OS X to give some memory back, or to limit the amount of memory the OS uses on the video card?

Or is there any other solution anyone has found?
For instance, when I first start my computer I can run the bandwidthTest sample with no issue.
However, after some time I get the following error:
CUDA error at bandwidthTest.cu:719 code=2(cudaErrorMemoryAllocation) "cudaEventCreate(&start)"

I confirmed via my own test that it is an out-of-memory error.
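
A minimal sketch of that kind of check (not the exact test I ran, just the same idea: call the same function the sample fails on and decode the error):

[code]
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Try the same call that bandwidthTest fails on and decode the error.
    cudaEvent_t start;
    cudaError_t err = cudaEventCreate(&start);
    if (err != cudaSuccess) {
        // code 2 is cudaErrorMemoryAllocation, reported as "out of memory"
        printf("cudaEventCreate failed: %d (%s)\n", (int)err, cudaGetErrorString(err));
        return 1;
    }
    cudaEventDestroy(start);
    printf("cudaEventCreate succeeded\n");
    return 0;
}
[/code]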

Another interesting thing I have found: when I print the "available memory" while there is memory free, I get a sensible number like 112435200, but when I am getting the out-of-memory error I get something huge like 140735467355095.

Any help is appreciated.

Thanks,
Mike

I have the same setup, and also discovered that 1 GB of device memory is barely adequate for CUDA + OS X desktop apps.

I find it useful to install gfxCardStatus ([url]http://gfx.io/[/url]) and then make sure the GPU is set to "Dynamic Switching". The gfxCardStatus menu bar widget makes it easy to spot which application might be eating up GPU memory, and dynamic switching encourages OS X to move application data to the integrated GPU when possible.

For me, in some cases, it works if I close the lid and then open it again after a few seconds: CUDA becomes able to allocate memory again.

Hmm, yes, I was afraid of this. Too bad there isn't a way to prevent OS X from using all of our memory. Thanks for the replies.

Also, regarding the nonsensical free-memory values, I suspect the problem might be caused by the CUDA context failing to initialize completely due to lack of memory. I would check the return value from the function that reports free memory to make sure it executed successfully.
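
Something along these lines would make the failure explicit. This is just a sketch, and I'm assuming the free-memory query is cudaMemGetInfo; cudaFree(0) is a common trick to force context creation up front:

[code]
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Force context creation first; if the context can't come up
    // (e.g. out of memory), this call reports the failure.
    cudaError_t err = cudaFree(0);
    if (err != cudaSuccess) {
        printf("context init failed: %s\n", cudaGetErrorString(err));
        return 1;
    }

    size_t free_bytes = 0, total_bytes = 0;
    err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        // On failure the output arguments are never written, so printing
        // them unchecked can show huge leftover garbage values.
        printf("cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("free: %zu bytes, total: %zu bytes\n", free_bytes, total_bytes);
    return 0;
}
[/code]

Compile with nvcc and run it both right after boot and once the machine is in the failing state, and the difference should show up in the reported error rather than in a garbage number.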