Virtual memory/virtual addressing and CUDA

Hi, has anyone ever dealt with virtual addressing / virtual memory in CUDA? I need to work with large files that can’t fit in global memory. The post below is the only one I found that vaguely mentions “virtual addressing”, but I couldn’t find the term anywhere in the programming guide. Thanks in advance. [url=“http://forums.nvidia.com/index.php?showtopic=51053&hl=virtual+addressing”]http://forums.nvidia.com/index.php?showtop...tual+addressing[/url]

There’s no such thing as file mapping in CUDA. If you need to work with files larger than global memory, you have to upload and process them in pieces.
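A minimal sketch of that chunked approach, streaming a large file through a fixed-size device buffer. The kernel `process_chunk` and the 64 MB chunk size are placeholder assumptions, not anything from the thread, and error checking is trimmed for brevity:

```c
// Sketch: stream a file too large for global memory through the GPU
// one fixed-size chunk at a time. process_chunk is a hypothetical
// kernel standing in for the real per-chunk work.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK_BYTES (64 * 1024 * 1024)  /* 64 MB per transfer (arbitrary) */

__global__ void process_chunk(char *data, size_t n)
{
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] ^= 0xFF;  /* placeholder transform */
}

int main(int argc, char **argv)
{
    FILE *f = fopen(argv[1], "rb");
    char *h_buf = (char *)malloc(CHUNK_BYTES);
    char *d_buf;
    cudaMalloc((void **)&d_buf, CHUNK_BYTES);

    size_t n;
    while ((n = fread(h_buf, 1, CHUNK_BYTES, f)) > 0) {
        cudaMemcpy(d_buf, h_buf, n, cudaMemcpyHostToDevice);
        process_chunk<<<(unsigned)((n + 255) / 256), 256>>>(d_buf, n);
        cudaMemcpy(h_buf, d_buf, n, cudaMemcpyDeviceToHost);
        /* ...consume the processed chunk on the host... */
    }

    cudaFree(d_buf);
    free(h_buf);
    fclose(f);
    return 0;
}
```

With page-locked (pinned) host buffers and streams, the upload of one chunk can overlap the processing of the previous one, but the simple loop above is the core idea.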

Thanks AndreiB for the reply. When I read wumpus’s and vpodlozhnyuk’s posts (see the link in the first post), I got the impression that CUDA could do virtual memory somehow, which would save me the trouble of doing the file mapping myself.

The Tesla architecture has a 64-bit MMU and supports ‘virtual memory’ in the sense of mapping memory from the CPU into the GPU’s address space, but this does not extend to external hardware like hard drives.
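For reference, the host-memory mapping wumpus describes is exposed in later CUDA releases (2.2 and up, if I recall correctly) as mapped page-locked memory, often called “zero-copy”. A hedged sketch, assuming a device that reports `canMapHostMemory`:

```c
// Sketch: mapped (zero-copy) host memory. The device pointer aliases
// the pinned host allocation, so kernel accesses go over PCIe on
// demand instead of through an explicit cudaMemcpy. Error checking
// omitted for brevity.
#include <cuda_runtime.h>

int main(void)
{
    /* Must be set before any CUDA context is created. */
    cudaSetDeviceFlags(cudaDeviceMapHost);

    float *h_data, *d_data;
    cudaHostAlloc((void **)&h_data, 1024 * sizeof(float),
                  cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&d_data, h_data, 0);

    /* Launch kernels with d_data; writes become visible in h_data
       after synchronization. */

    cudaFreeHost(h_data);
    return 0;
}
```

Note this still only maps CPU RAM, not files on disk, so for disk-resident data you would combine it with host-side file I/O (e.g. `mmap` the file, then copy or pin regions of it).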

I see. Makes total sense. :-) Thanks wumpus!