PRIME render offloading on Nvidia Optimus

I’d love the offloading, but since it’s such a complex piece of software, PRIME should allow us to select which programs run on the dedicated (NVIDIA) graphics card.

Something similar to the Bumblebee project, like prime-run, or adding more options to the NVIDIA settings application.

That’s exactly how it works already. Just set DRI_PRIME=1 as an environment variable for the application and it will use the offload GPU (i.e. the discrete NVIDIA GPU).

So if I select Intel (aka Power Saving Mode) and then run a command like

DRI_PRIME=1 nvidia-settings

It should run on the discrete NVIDIA GPU, correct?

I tried it but it doesn’t work.

It’s supposed to work like this, but the nvidia drivers don’t support the relevant functionality, which is why this thread was started. You can try this behaviour by swapping the nvidia drivers for the free nouveau drivers, which are able to do render offloading.
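
For example, with nouveau installed you can check which GPU actually renders, roughly like this (glxinfo comes from your distribution’s mesa-demos / glx-utils package; the renderer strings will of course differ per machine):

glxinfo | grep "OpenGL renderer"                # default: integrated GPU
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"    # offload: discrete GPU via nouveau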

Oh ok, thank you.

It’s a shame, because I always try to use the most recent NVIDIA driver, and going back to nouveau is really not an option.

But with the code already on the nouveau side, wouldn’t it be easy to port it to the official drivers? Or is there such a major difference in implementation that it makes porting hard?

I don’t think that’s feasible. nouveau is part of mesa, which does the common stuff like providing an abstraction for kernel APIs such as DRI, but the nvidia driver is a whole different piece of software.

Nevertheless, here is the relevant patch which enabled prime for nouveau: ~airlied/linux - Official DRM kernel tree

As you can see, it was not much code and that happened almost 6 years ago.

Then why doesn’t NVIDIA support this? It’s not much code and it’s been working for 6 years.

I know Linux is a small market, but this small improvement would make the Optimus experience for Linux users a lot simpler.

Any news on this topic?

Honestly, I don’t think any nvidia dev will reply here.
It seems this topic doesn’t have any priority for nvidia.

@aplattner – is this something Nvidia is still planning on accomplishing at some point in time?

@zx2c4 Normally NVIDIA doesn’t announce what they are planning (at least for Linux). We just have to hope that they are working on a solution…

Please, any news on this topic? We really need this feature on Linux.

Will this feature be implemented under Wayland?

Fortunately I’m more interested in OpenCL (for image processing) than OpenGL, or I’d be pretty unhappy with myself for buying a high-end Skylake laptop with a Quadro M4000M that gets maybe 40% of the glmark2 score of a 7-year-old laptop with a first-gen Core i7 and an AMD Radeon HD 5870 running the free driver (and about 60% of what the HD 530 in the Xeon E3-1505M v5 does). I do like the idea of being able to power down a big chunk of the laptop when I don’t need it, but there may well come a time when I want some real graphics punch. clpeak at least reports numbers in the right ballpark, but getting everything set up was quite a headache.

Bump @aplattner. Please, do you have an update?

How much begging and grovelling does the Linux community have to do?

Offloading - everyone wants the power of the dGPU when necessary (typically when plugged in) and the power and heat savings when the dGPU isn’t needed (typically when running the laptop on battery with office applications).

Yet another bump and confirmation.

Please Nvidia - we Linux users are people too!

For openSUSE users, I’ve been able to get the suse-prime package working (from the repo home:/bosim:/suse-prime; I use the Tumbleweed version). Installing the RPM nominally requires removing the bumblebee packages, but you don’t really need to: it’s just a collection of scripts and doesn’t interfere with using bumblebee in Intel mode. The problems I’m still having are:

  1. glmark2 segv’s on startup. That’s a pity; I’d like to see just how fast it is. glxspheres is wildly faster under suse-prime than under either optirun or primusrun (by a factor of 7-10).

  2. I’m still unable to get external displays working.

I recommend not removing bumblebee, in fact; without it, I’m not able to run OpenCL at all (even with suse-prime).
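
For anyone else trying suse-prime, switching GPUs is just a script call, roughly like this (prime-select is the script the package installs; you have to log out and back in for the change to take effect):

sudo prime-select nvidia   # run the whole session on the NVIDIA GPU
sudo prime-select intel    # switch back to the Intel GPU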

Both of the above problems are solved:

  1. is a problem with glmark2; it needs to be run as LD_PRELOAD=/lib64/libpthread.so.0 glmark2. This has been reported by others on other systems; I suspect it’s a build problem with glmark2.

  2. was pilot error; something in one of the xorg.conf pieces explicitly turned off external displays.

I’m getting glmark2 numbers in the 7000 range, vs. 1500-2500 with optirun and primusrun. Interestingly, though, the terrain scene is faster under optirun than under “the real thing”.

Dear aplattner,

Would it be possible to get an update on the development of PRIME render offload in the nvidia driver? It has been 2 years since PRIME and PRIME synchronisation for output were introduced. They work just fine, but they require the nvidia dGPU to be running constantly, whereas current nvidia GPUs can use runtime suspend perfectly well once you unload the nvidia modules (and older GPUs can via bbswitch / ACPI). That gives power savings, and/or fans that can stop running.
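
To illustrate the power-down path I mean, here is a rough sketch of doing it by hand today (module names as in current driver releases; the 0000:01:00.0 bus address is just an example, substitute your dGPU’s):

sudo modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia          # unload the driver
echo auto | sudo tee /sys/bus/pci/devices/0000:01:00.0/power/control  # allow runtime PM
cat /sys/bus/pci/devices/0000:01:00.0/power/runtime_status            # should report "suspended"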

I have just bought a laptop with a 1050 Ti Max-Q which works nicely, but not having PRIME offloading is a bit of a letdown. I didn’t know about the gap until I was installing Linux on it; it was just one of those things I assumed would be working, since offloading sounded like a basic feature to me. Getting the laptop configured so I can use it docked, and also on the road with low battery usage, is turning out to be a bit of a hassle, and that’s a shame.

Now, my possible workarounds are:

  1. bumblebee with optirun/primusrun. Drawbacks are a loss of fps compared to PRIME, and it does not work with Vulkan. (Usage for this and option 2 is sketched after the list.)
  2. nvidia-xrun, which starts another X server and runs your app there. This works with Vulkan, but you have to set up another desktop (perhaps even another WM/DE just for that app), and you have to switch TTYs.
  3. Nvidia prime switching, which requires you to log out of your current session, restart X, and log back in again, which means you might have to restart all the applications you had running.
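
For reference, the workarounds from points 1 and 2 look roughly like this in practice (glxgears stands in for any application; nvidia-xrun is a third-party script, so check its README for the exact invocation):

optirun glxgears       # bumblebee: run one app on the dGPU
primusrun glxgears     # same idea, via the primus bridge
nvidia-xrun openbox    # separate X server with its own WM, on another TTY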

With the coming of more Vulkan games, and DXVK / vkd3d for Wine, Vulkan is becoming more important; or rather, it is already important. It might be possible to use Vulkan’s multi-GPU support somehow to get offloading going with a hackish method (see the sketch below), but having PRIME offload capability in the nvidia driver, usable for everything, would be a much better solution. Not just for Vulkan, but for all laptops with nvidia GPUs, now and in the future.
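
The hackish method I have in mind would be something along these lines: the Vulkan loader lets you restrict which driver it loads via VK_ICD_FILENAMES, so in principle a single application could be pointed at the nvidia ICD (the .json path below is typical but distribution-dependent, and I have not verified this actually works on an Optimus setup):

VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json vkcube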

That PRIME offloading is possible with nvidia hardware has (I believe) already been proven by the open source drivers. So I figure it should be possible for nvidia devs to implement this as well.

Anyways, could you please update us on the current status? Are you working on PRIME offloading? Is it in the planning? (Are there technical issues preventing you from working on it?)

Since it has been a while since you gave any information on this subject, I hope to hear from you (or anyone else at nvidia who has current information). I thank you in advance for a response,

Dox

Hi folks,

Yes, it’s still being worked on. Kyle laid the groundwork with the server-side vendor-neutral dispatch code that’s in X.Org xserver 1.20. There’s still some more work to be done there and support for it needs to be wired up inside our driver, but basic support for loading NVIDIA’s GLX as a vendor in the server is in place. Kyle is putting together a proposal for the next steps.
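
To give a rough idea of what vendor selection looks like from the client side today: libglvnd’s GLX dispatch already honors an environment variable to pick the vendor library. A minimal sketch using that variable (this is just the existing glvnd mechanism, not the final offload interface):

__GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL vendor"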