Tesla M10 GPU profile general question

I have three Tesla M10 GPU cards, one in each of three Cisco UCS hosts. I have been using this profile chart as guidance for deployment: http://imgur.com/a/pVK3Q I'm a bit confused by the chart, though. I don't quite understand the last two columns: "Max vGPUs per GPU" and "Maximum vGPUs per board".

What I'm looking for is the number of VMs I can get from each card. Around 60 VMs will be sharing these 3 cards, and I'm curious which profile would be best suited for this.

Any clarification of this chart or the profiles in general would be great.


The M10 (the board) has 4 GPUs on it.

Using a 1 GB profile (you'll need a 1 GB profile minimum so you can use NVENC), you get 8 VMs per GPU, or 32 VMs per board (8 × 4 = 32). If you use a 2 GB profile, you'll get only 4 VMs per GPU, or 16 VMs per board, so you won't be able to support 20 users per server. 1 GB is your only option, unless you use RDS.
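The arithmetic above can be sketched out for each profile size. This is just a sizing sketch using the figures from this thread (M10 = 4 GPUs per board, 8 GB of frame buffer per GPU, 60 VMs spread across 3 hosts); check the chart for your actual deployment.

```python
# Sizing sketch: how many vGPUs each profile size yields on a Tesla M10.
# Assumed figures (from this thread / the chart): 4 GPUs per board,
# 8 GB frame buffer per GPU, 60 VMs shared across 3 hosts.
GPUS_PER_BOARD = 4
FB_PER_GPU_GB = 8

users_needed_per_server = 60 // 3  # 20 concurrent users per host

for profile_gb in (1, 2, 4, 8):
    vgpus_per_gpu = FB_PER_GPU_GB // profile_gb
    vgpus_per_board = vgpus_per_gpu * GPUS_PER_BOARD
    fits = vgpus_per_board >= users_needed_per_server
    print(f"{profile_gb} GB profile: {vgpus_per_gpu} per GPU, "
          f"{vgpus_per_board} per board, covers 20 users: {fits}")
```

Running this shows only the 1 GB profile (32 vGPUs per board) covers 20 users per host; 2 GB already drops to 16 per board.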

As for the best profile size, that depends on the applications, the operating system, and how many monitors you run at what resolution. There are lots of factors to consider. Start with 1 GB and see if that's enough, but as noted, if you go any higher you won't get 20 (concurrent) users per server.


Thanks for the reply. When you say 1GB and 2GB profile - where is that on the chart or how are you distinguishing that? Sorry for the silly question.

Under the heading "GRID virtual GPU" and also under the heading "Frame Buffer" ;-)

The vGPU profiles allocate "Frame Buffer" in GB: 0, 1, 2, 4, or 8 (0 = 512 MB, which is not worth bothering with).

ah ha! thanks again!