NVIDIA Tesla P40 dynamic GPU memory allocation possible?

Hello there

is the GPU memory of a P40 allocated to a VM as a fixed amount, or is it dynamic? The P40 has 24 GB of GPU RAM.

so let’s take the following profile:
P40-1Q
1024 MB framebuffer
2 virtual display heads
max resolution 4096x2160
max vGPUs per GPU: 24

so a maximum of 24 VMs can run together on this GPU, or is overprovisioning possible?

FB allocation is always fixed, so you cannot overprovision FB. That said, your calculation is correct.
24x 1Q is the maximum number of profiles possible on a P40.
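Since FB is statically partitioned, the maximum density is just the card's total framebuffer divided by the profile's framebuffer. A minimal sketch of that arithmetic (the 1Q and 2Q sizes are taken from the profile discussion above; this is illustration only, not an official sizing tool):

```python
# Max vGPU density on a P40: the framebuffer is statically partitioned,
# so density = total FB / per-profile FB, with no overprovisioning.
P40_FB_MB = 24 * 1024  # 24 GB total framebuffer on the P40

def max_vgpus(profile_fb_mb: int, total_fb_mb: int = P40_FB_MB) -> int:
    """Fixed FB allocation: each vGPU reserves its full framebuffer up front."""
    return total_fb_mb // profile_fb_mb

print(max_vgpus(1024))  # P40-1Q (1024 MB) -> 24 vGPUs
print(max_vgpus(2048))  # P40-2Q (2048 MB) -> 12 vGPUs
```

The same division applies to any Q profile on the card: halving the per-VM framebuffer doubles the VM count, and vice versa.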

Regards

Simon

thank you very much for your fast response! That helped me a lot!

can you explain to me what the BAR1 memory is, then?

what I found in
http://developer.download.nvidia.com/compute/cuda/6_0/rel/gdk/nvidia-smi.331.38.pdf

"BAR1 Memory Usage
BAR1 is used to map the FB (device memory) so that it can be directly accessed by the CPU or by 3rd
party devices (peer-to-peer on the PCIe bus)."

but I don’t get this one… what does direct access by the CPU mean here?

and how many Tesla P40s can you put in one server with 4 PCI slots? 2 or 4?

from the datasheet:
–> Form Factor: PCIe 3.0 Dual Slot (rack servers)

Depends on the server hardware. Most vendors support up to 3 P40s in their current 2U hosts.
See our HCL: http://www.nvidia.com/object/grid-certified-servers.html

is it possible to run 1x P40 and 2x P4 in a single server at once?

Technically this might be possible, but OEMs won’t support mixing different Tesla boards in the same server. Therefore you should use the same board type throughout a given server.