Hello,
For the past few days I’ve been trying to set up my home server to play games on it.
I own a Supermicro X11SPG-TF board that has an onboard ASPEED GPU with a VGA output.
The GPU accelerator in it is an Nvidia Tesla M40, bought for around $200 off eBay.
CPU (not that this is important): Xeon Gold 6138
OS: Windows Server 2016 Datacenter
Day 1:
The drivers I got from Official Drivers | NVIDIA are complete and utter trash with no options whatsoever, but they did work for a while and to some extent.
I actually had success installing the drivers and making the card appear under Display Adapters. I also managed (it took me hours) to switch the GPU from TCC into WDDM mode, which was hard because googling "Tesla" in 2019 leads to unwanted results. Then, for a few seconds, I played a DirectX game. After restarting the server, however, the GPU refused to render Steam and DirectX games (I had nvidia-smi -l running in the background). The only thing that would still run on my M40 was OpenGL, which I tested with Unigine Heaven.
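For anyone who ends up googling the same thing: the TCC-to-WDDM switch is done with nvidia-smi's driver-model flag. GPU index 0 is an assumption here (check your index with plain nvidia-smi); it needs an elevated prompt and a reboot, and the driver has to support WDDM on that card.

```powershell
# Switch GPU 0 from TCC to WDDM (driver model 0 = WDDM, 1 = TCC).
# Run as Administrator, then reboot for it to take effect.
nvidia-smi -i 0 -dm 0
```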
That’s about the shortest summary I can give of day 1.
Day 2:
After installing, removing and reinstalling drivers, plus software like VNC, TeamViewer, Zero-something VPN and all that crap, I had completely lost my nerve.
The X11SPG-TF is a Supermicro server board that takes ages to POST, and seeing that this was leading nowhere, I thought of another approach.
The initial idea of gaming on my M40 came from googling "Cloud Gaming", which eventually led me to Parsec, which basically offers a "do it your way" cloud gaming setup.
Everyone was using those NV6 Azure servers with the M60, which is essentially the same as my M40. (I think; I’m not actually sure about this one. I’ve come across the term VDI, and apparently that’s what my M40 doesn’t have.)
So I started studying how Azure works, and it’s basically the same as Hyper-V. I went ahead and added the Hyper-V role to my server and set up a Gen 2 VM, only to notice that I needed something called RemoteFX to pass a GPU to the VM. I added that feature in Server Manager as well, restarted, and immediately ran into other issues: a 1024 MB VRAM limitation, unbearable lag in even completely simple games, and other delights like that.
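For reference, the RemoteFX adapter can also be attached from PowerShell instead of the GUI. The VM name below is a placeholder, and the resolution/VRAM values are just illustrative; the ~1 GB VRAM cap mentioned above is exactly the limitation of this path.

```powershell
# Hypothetical sketch: attach a RemoteFX 3D adapter to a VM
# ("GamingVM" is a placeholder name).
Add-VMRemoteFx3dVideoAdapter -VMName "GamingVM"
Set-VMRemoteFx3dVideoAdapter -VMName "GamingVM" `
    -MonitorCount 1 -MaximumResolution "1920x1200" -VRAMSizeBytes 1GB
```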
I gave up.
Day 3.
Searching the same few forums Google kept spitting out (e.g. /r/cloudygamer and this forum), I noticed there was in fact something I was missing, because the Azure servers don’t use trashy RemoteFX to pass their GPU to the VM. So I googled for that and came across many other forums; it was like being suddenly enlightened.
So, skip forward a few hours: I grabbed a few commands, stuck them into PowerShell and voila! A new unidentified device popped up in Device Manager in my VM. "Let’s install drivers!" I thought to myself. So I went to the Nvidia site, downloaded the freshest, newest CUDA 10.1 drivers (no clue what CUDA is), installed them and rebooted the VM.
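For anyone curious, the "few commands" for passing a physical GPU into a Hyper-V VM are Discrete Device Assignment (DDA). This is a sketch under assumptions: the VM name is a placeholder, and you have to disable the device on the host (Device Manager) before dismounting it.

```powershell
# Hypothetical DDA sketch ("GamingVM" is a placeholder VM name).
# 1) Find the GPU's PCI location path (friendly-name match is an assumption):
$gpu = Get-PnpDevice -FriendlyName "*Tesla M40*"
$locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]
# 2) Detach the GPU from the host (disable it in Device Manager first):
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
# 3) Hand it to the VM:
Add-VMAssignableDevice -LocationPath $locationPath -VMName "GamingVM"
```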
The VM is running on a 2 TB M.2 SSD RAID, so the boot times were incredible. I could try so many things with this VM; instead of losing hours to reboots, I started losing only minutes, and the will to live.
Once I was back in the OS I ran into another problem. The GPU was detected as an Nvidia M40, but had a yellow exclamation mark next to it, meaning something was wrong.
It was Code 12… "What is Code 12?" you might think to yourself. Well… I had no clue either.
So I googled "Hyper-V VM Tesla M40 Code 12 not enough resources error" and immediately got to enjoy some new Tesla Model S Autopilot footage.
At this point I stopped googling with the "Tesla" keyword. Every single article about the error code told me either to "uninstall the driver and install it again", which is the first thing even a non-tech-savvy person would do, or to enable Above 4G Decoding and similar settings in the BIOS.
I did everything the first 20 articles told me, and to my surprise the error was still there. I suspected all along that the issue was coming from the MMIO allocation or something like that. The article I was following on MMIO allocation told me to set the minimum to 2 GB and the maximum to the size of my VRAM, so 12 GB in my case. I did that, played around with it for about an hour, and tested every value between 1 GB and 3 GB on the minimum and 8 GB and 13 GB on the maximum.
Still no luck, same error.
Day 4.
I was at school that day, so I only had like 12 hours of free time after coming home, and I didn’t achieve much.
"What if this assignment wasn’t meant for Tesla GPUs?" I thought to myself. So I removed Hyper-V and everything related to it, reassigned the M40 back to the host, and continued from there.
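Undoing the assignment is roughly the reverse of setting it up. A sketch, with the same placeholder VM name and location path as before:

```powershell
# Hypothetical sketch of returning a DDA'd GPU to the host
# ($locationPath and "GamingVM" are placeholders from the assignment step).
Remove-VMAssignableDevice -LocationPath $locationPath -VMName "GamingVM"
Mount-VMHostAssignableDevice -LocationPath $locationPath
# Then re-enable the device on the host in Device Manager.
```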
What I was trying to do at this point was read between the lines of everything I’d seen since Day 1.
What I ended up doing was basically trying to get the Tesla M40 set as the display adapter for the Generic PnP Monitor. I haven’t achieved that, and people on the internet tell me it isn’t achievable, but somehow I had a fully utilized Tesla M40 pushing 100 FPS in ARK: Survival Evolved.
For a few seconds… then the game crashed, and it was back to OpenGL only, driver reinstalls, etc. All of that just to see that amazing framerate for a moment.
I wasn’t sure why it kept crashing. Was it a driver issue? Was the card overheating? What was wrong with it? (Just for the record, nvidia-smi said it was only at 70 °C.)
So I went to take a look at the machine, and what I found was that the jet-propeller fan had fallen off the Tesla. (It was duct-taped to it.)
I cleared the crashing issue, but now some combination of things I did (quitting RDP, exiting the game and launching it again) made it freak out the same way as before; instead of crashing the game or the whole server, it simply refused to render DirectX programs again.
It was like 2AM by now and I had to wake up at 5AM, so I had to call it quits.
Day 5 (Today)
I took a laptop to school just so I can work on it while I’m bored or something.
Running it on the server like that simply wasn’t working, so I searched more about the reassignment and found that Microsoft has an official guide that is 100× better than the ones I’d been following.
The issue was the upper MMIO space: it had to be set to a ridiculously large number for it to work. So I reinstalled Hyper-V, set everything back up in the VM, passed it the GPU, gave it drivers, and hooray! No exclamation mark, no nothing. Simply an Nvidia Tesla M40 working correctly.
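The MMIO sizing from Microsoft's DDA guide looks like this. The VM name is a placeholder, and the sizes are the guide's example values rather than anything tuned for the M40; run it while the VM is off.

```powershell
# MMIO space sizing per Microsoft's Discrete Device Assignment guide
# ("GamingVM" is a placeholder; values are the guide's examples).
Set-VM -VMName "GamingVM" -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb
```

The high MMIO space is the "ridiculously large number" that made the Code 12 go away for me.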
At this point I basically had the same setup as most people on /r/cloudygamer and was able to follow their guide from there.
That didn’t last long; in fact, not even a second. Same issue with the virtual monitor. Everyone else (there was even a video) installed the driver, restarted the VM instance, and on boot had two monitors in Device Manager, one on the M60 and the other on the Microsoft Basic Display Adapter, which you then had to disable.
So I started searching for this issue and found a guy, in fact two people, following the same guide who ran into the same problem.
What they did was switch the Tesla from TCC into WDDM, and that fixed it for them. That was the very first thing I learned about these GPUs, so it was something I had already attempted.
That’s about it. This is a really watered-down version of the story. It took me five whole days, including today and the hour spent writing this, to get basically nowhere.
I’ve applied for the Nvidia GRID 90-day trial to test the GRID drivers, since someone suggested that to me and it’s the only thing I haven’t tried yet.
Thank you for reading, and sorry about my grammar, sentence construction and maybe even spelling.
I’m not a native speaker and it’s around 12 AM, so it’s hard to keep focused.
Can someone please assist me with this? I’m willing to do anything (besides paying money; if I did that, I could simply buy gaming hours on Parsec, defeating the whole purpose of doing it on my own hardware).
Parsec says:
"NVIDIA Tesla, GRID and Quadro
Professional workstation and server graphics cards will work with Parsec provided that they support hardware video encoding (NVIDIA NVENC), support either a physical display or display emulation via EDID, and are running in WDDM mode."
Can I achieve something like that? Even with a completely different VM, I don’t even care at this point. Feel free to suggest even paid services, but I prefer the open-source, free ones.