System: Win 7 x64
System Memory: 12 GB
CPU: Core i7 920 @ 2.67 GHz
CUDA Devices: Quadro FX 5800 and Tesla C1060, each with 4 GB of memory
Toolkit Version: 3.2
SDK Version: 3.2
C++: MS Visual Studio 2008 SP1
Same configuration also running in Ubuntu 10.04 LTS x64
I need to visualize a large amount of volume data using a 3D texture. The current volume is 2.5 GB in size, but
there will be even larger volumes to visualize in the near future.
Under Ubuntu my application works as expected and I am able to use the full amount of memory the CUDA devices provide.
Unfortunately, under Windows the WDDM throws a monkey wrench into my plans…
Using nvidia-smi I was able to activate compute-only mode for the Tesla card and can now allocate the full amount
of memory on that board. But I really need to be able to use the full memory size of the Quadro card as well.
Is it possible to bypass WDDM on Quadro or GeForce boards and allocate more memory than WDDM allows?
I know it is possible to allocate more memory in smaller pieces (e.g., three 1 GB allocations, which WDDM permits, instead of a single 3 GB allocation, which it does not…).
So if I need 2.5 GB for the volume data: is there any way to allocate this memory in smaller parts and then map the texture reference
to these memory parts, so that the texture reference behaves as normal?
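To illustrate what I mean, here is a minimal sketch of the chunked allocation I have in mind (the 1 GB chunk size and the error handling are just examples, not my actual code):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    const size_t chunkSize = 1024ull * 1024ull * 1024ull; // 1 GB per chunk
    const int numChunks = 3;                              // 3 GB total
    void* chunks[numChunks] = { 0 };

    // Each 1 GB cudaMalloc succeeds under WDDM, while a single
    // 3 GB cudaMalloc of the same total size fails.
    for (int i = 0; i < numChunks; ++i) {
        cudaError_t err = cudaMalloc(&chunks[i], chunkSize);
        if (err != cudaSuccess) {
            printf("chunk %d failed: %s\n", i, cudaGetErrorString(err));
            return 1;
        }
    }

    // The open question: can one 3D texture reference be bound across
    // these separate, non-contiguous allocations?

    for (int i = 0; i < numChunks; ++i)
        cudaFree(chunks[i]);
    return 0;
}
```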
Is the upper limit for a single allocation exactly the same in all cases (somehow fixed in the driver), or does it differ depending on how much
device memory a specific card provides? And if the latter, is it a fixed percentage, or can I somehow query the maximum allowed allocation in my code?
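The only programmatic query I am aware of is cudaMemGetInfo, which reports free and total device memory but, as far as I can tell, not the per-allocation limit WDDM imposes (a sketch; the casts are only there because VS2008's printf lacks %zu):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    size_t freeBytes = 0, totalBytes = 0;

    // Reports free/total memory on the current device; this does NOT
    // reveal the largest single allocation the WDDM driver will grant.
    cudaMemGetInfo(&freeBytes, &totalBytes);

    printf("free: %lu MB, total: %lu MB\n",
           (unsigned long)(freeBytes >> 20),
           (unsigned long)(totalBytes >> 20));
    return 0;
}
```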
Hopefully someone here can help me out of this situation (and maybe even explain the purpose of this allocation restriction).
Thanks and kind regards,