Hi, just a general question about the Tesla architecture. Tesla uses a 64-bit MMU to map memory between the CPU and GPU memory spaces. Does that mean the memory on the GPU also takes up CPU address space? Will the well-known "3 GB-ish" problem of 32-bit OSes (XP or Vista) affect GPU memory? For example, if I have 32-bit XP with a 1.5 GB Tesla card, will we only have 1.5 GB of CPU memory left to use because the Tesla takes the other 1.5 GB? I just discovered that the new Tesla to be released this fall has 4 GB of global memory, and I have to decide whether that card can be used for our 32-bit application. Thanks in advance for any replies.
Well, I have also read that it is possible for the GPU to map CPU memory into its own address space using the MMU, but I don't know of any way to actually do this in CUDA. So, as far as I know, there is no way to use more than the 1.5 GB on your Tesla in CUDA.
What I have also read is that people with more than 4 GB of system + GPU RAM combined (e.g. 3 GB of system RAM and 1.5 GB of GPU RAM) need a 64-bit OS; as far as I remember, that configuration does not work on a 32-bit OS.
I don't know if the T10P works on a 32-bit OS; I use it on 64-bit Linux, but given the above I doubt it. Nevertheless, a 32-bit application might be able to use it when run on a 64-bit OS.
I run CUDA on a Mac with dual cores and 16 GB of RAM plus a GT8800, and it seems to work fine. My stuff is dead slow, but that's another issue.
Now, the code is compiled as 32-bit (an NVIDIA limitation under OS X) but the OS is 64-bit, so I'm not sure what causes the problems.
So a 32-bit program on a 64-bit OS works even when your total amount of memory is > 4 GB. That answers the question :)
Usually only up to 256 MB of card memory is directly mapped into the CPU physical address space; the rest is accessed via DMA. Also, the CUDA API uses 64-bit numbers internally on every platform. So there should be no problem, at least in principle, in using a 4 GB graphics card on a 32-bit system. There might be other problems, though; don't take my word for it.