CUDA: OUT OF MEMORY

Hi,

  I was trying to use CUDA for two algorithms that basically do the same thing. The difference is that one uses O(n) space and the other uses O(n^2). I want to compare the speed of the two algorithms on large data sets, but CUDA simply reports out of memory once the GPU memory is exhausted.

 Is there a way to let CUDA use CPU memory as an extension once the GPU memory runs out?


  I am using CUDA 3.1 with VS2008, NV GT330M

Thanks~

No, there is not. To run with large datasets on a single GPU you can use a Tesla or Quadro series GPU, which comes with significantly more memory, or you can break your kernel call into multiple phases and copy memory between the host and device at each step.

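If you go the multi-phase route, a minimal sketch of the idea might look like the following. The squareChunk kernel and the sizes are just placeholders for illustration: copy a chunk to the device, launch the kernel on it, copy the result back, and repeat until the whole dataset has been processed.

#include <cuda_runtime.h>
#include <stdlib.h>

// Hypothetical element-wise kernel standing in for the real algorithm.
__global__ void squareChunk(float *d_data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d_data[i] *= d_data[i];
}

int main(void)
{
    const size_t total = 1 << 24;   // total elements (example size)
    const size_t chunk = 1 << 20;   // elements that fit on the GPU at once

    float *h_data = (float *)malloc(total * sizeof(float));
    for (size_t i = 0; i < total; ++i)
        h_data[i] = (float)i;

    float *d_buf;
    cudaMalloc((void **)&d_buf, chunk * sizeof(float));

    // Process one chunk at a time: copy in, run the kernel, copy out.
    for (size_t off = 0; off < total; off += chunk) {
        size_t n = (total - off < chunk) ? (total - off) : chunk;
        cudaMemcpy(d_buf, h_data + off, n * sizeof(float), cudaMemcpyHostToDevice);
        squareChunk<<<(unsigned int)((n + 255) / 256), 256>>>(d_buf, (int)n);
        cudaMemcpy(h_data + off, d_buf, n * sizeof(float), cudaMemcpyDeviceToHost);
    }

    cudaFree(d_buf);
    free(h_data);
    return 0;
}

This obviously only works if each phase can be computed from data that fits in device memory; for the O(n^2) algorithm you would have to tile over both dimensions.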

Couldn’t you use mapped memory? I forget the syntax, but I presume that if the initial pointer is allocated via cudaHostAlloc with cudaHostAllocMapped set, then the GPU could access as much memory as could be pinned on the host (or up to the limits of 32-bit, at least). This would be regardless of the actual amount of RAM on the GPU itself. Of course, performance would be terrible if the array was accessed a lot.

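For reference, zero-copy mapped memory along those lines would look roughly like this. This is a sketch, assuming a card that reports canMapHostMemory (the GT330M should), and the squareInPlace kernel is just an illustration. The key calls are cudaSetDeviceFlags(cudaDeviceMapHost) before the context is created, cudaHostAlloc with cudaHostAllocMapped, and cudaHostGetDevicePointer to get the device-side alias of the host allocation:

#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical kernel; every access here goes over the PCIe bus.
__global__ void squareInPlace(float *d_data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d_data[i] *= d_data[i];
}

int main(void)
{
    const int n = 1 << 20;

    // Must be called before any CUDA context exists on the device.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    // Pinned host allocation that the GPU can address directly.
    float *h_data;
    cudaHostAlloc((void **)&h_data, n * sizeof(float), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i)
        h_data[i] = (float)i;

    // Device pointer aliasing the pinned host allocation.
    float *d_data;
    cudaHostGetDevicePointer((void **)&d_data, h_data, 0);

    squareInPlace<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaThreadSynchronize();   // CUDA 3.x name; kernel writes land in host memory

    printf("h_data[2] = %f\n", h_data[2]);   // expect 4.0
    cudaFreeHost(h_data);
    return 0;
}

As noted above, this trades capacity for bandwidth: every global memory access becomes a PCIe transaction, so it only makes sense for data that is touched once or twice.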