NVIDIA’s 364.12 GPU driver release series brings together a lot of technologies that users and developers may be interested in:
- Support for Vulkan
- OpenGL Vendor-Neutral Dispatch (GLVND): https://github.com/NVIDIA/libglvnd
- Support for DRM KMS
- Many new EGL extensions
First, 364.12 is the first mainline NVIDIA GPU driver release to include Vulkan support. We’re fully committed to supporting all of OpenGL, GLX, EGL, and Vulkan, but Vulkan is definitely the most forward-looking. The full 1.0 spec, including the Window System Integration (WSI) extensions, is here:
https://www.khronos.org/registry/vulkan/specs/1.0-wsi_extensions/pdf/vkspec.pdf
It is worth noting that Vulkan defines window system bindings itself (see, e.g., VK_KHR_xcb_surface, VK_KHR_xlib_surface, and VK_KHR_wayland_surface) and thus is independent of GLX and EGL. The Vulkan WSI extensions VK_KHR_display and VK_KHR_display_swapchain define how to present in the absence of a window system, which is interesting in the context of the following sections.
In 364.12, NVIDIA’s Vulkan driver supports VK_KHR_xcb_surface and VK_KHR_xlib_surface, though not yet VK_KHR_wayland_surface, VK_KHR_display, or VK_KHR_display_swapchain; those are still in development.
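For reference, here is a minimal sketch of targeting X11 through VK_KHR_xcb_surface (illustrative only: the Vulkan instance is assumed to have been created with the VK_KHR_surface and VK_KHR_xcb_surface extensions enabled, the xcb connection and window come from the application’s existing X11 setup, and error handling is omitted):

#define VK_USE_PLATFORM_XCB_KHR
#include <vulkan/vulkan.h>

VkSurfaceKHR create_xcb_surface(VkInstance instance,
                                xcb_connection_t *connection,
                                xcb_window_t window)
{
    /* Describe the existing xcb window that the surface should wrap. */
    VkXcbSurfaceCreateInfoKHR info = {
        .sType      = VK_STRUCTURE_TYPE_XCB_SURFACE_CREATE_INFO_KHR,
        .connection = connection,
        .window     = window,
    };
    VkSurfaceKHR surface = VK_NULL_HANDLE;

    /* The resulting VkSurfaceKHR is then used with VK_KHR_swapchain. */
    vkCreateXcbSurfaceKHR(instance, &info, NULL, &surface);
    return surface;
}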
Next, OpenGL Vendor-Neutral Dispatch (GLVND) is important for two major reasons:
(1) It redefines the Linux OpenGL ABI in such a way that multiple OpenGL implementations can cleanly coexist on the file system: over time, this should put an end to the age-old Linux libGL.so collision problems.
(2) It cleanly defines what symbols should be exported by each library, in order to use EGL with full OpenGL, rather than just EGL + OpenGL ES. Using EGL with full OpenGL on Linux isn’t new, but the GLVND division of libOpenGL.so for OpenGL symbols, libGLX.so for GLX symbols, and libEGL.so for EGL symbols is nice. The sample code referenced below links against libEGL.so and libOpenGL.so.
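As a rough illustration of the library split (exact packaging varies by distro), an EGL + full OpenGL application only needs the EGL and GL headers and links against libEGL.so and libOpenGL.so, with no libGL.so involved:

/* Build roughly as: cc app.c -lEGL -lOpenGL
 * libEGL.so provides the EGL entry points and libOpenGL.so the OpenGL ones;
 * no libGL.so or libGLX.so is needed for a pure EGL + OpenGL application. */
#include <EGL/egl.h>
#include <GL/gl.h>

int main(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    /* Request desktop OpenGL (not OpenGL ES) for contexts created later. */
    eglBindAPI(EGL_OPENGL_API);

    /* ... choose an EGLConfig, create a context and surface, make current,
     * and call OpenGL entry points resolved from libOpenGL.so ... */
    return 0;
}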
I gave a GLVND talk at XDC 2013:
http://www.x.org/wiki/Events/XDC2013/XDC2013AndyRitgerVendorNeutralOpenGL/
And a status report at XDC 2014:
http://www.x.org/wiki/Events/XDC2014/XDC2014RitgerGLABI/
The GLVND implementation is maturing. We shipped it experimentally starting in 361.16, and enabled it by default in 364.12 (it can still be disabled at install time, if desired). There has been a lot of interest, feedback, and contributions from Mesa developers and distribution packagers. Thanks! Based on recent feedback, we’re about to make an ABI-breaking change to GLVND, which will hopefully make future ABI compatibility easier to manage. Distros should probably hold off on packaging the upstream GLVND until after that ABI change has settled.
There are a lot more GLVND packaging details here:
https://devtalk.nvidia.com/default/topic/915640/unix-graphics-announcements-and-news/multiple-glx-client-libraries-in-the-nvidia-linux-driver-installer-package/
Once the next round of GLVND ABI issues is ironed out, we hope distros will start packaging GLVND. The NVIDIA driver .run installer will install its own copy of GLVND if it doesn’t detect a distro-provided copy on the filesystem.
GLVND source is here:
https://github.com/NVIDIA/libglvnd
If you want to participate in discussion around it, there is some discussion in the GitHub “issues” tracker on that page, and other discussions have taken place on the mesa-dev mailing list.
Lastly, in 364.12 we are finally providing DRM KMS support.
Our display programming support is centralized in a kernel module named nvidia-modeset.ko. Traditional display interactions (X11 modesets, OpenGL SwapBuffers, VDPAU presentation, SLI, stereo, framelock, gsync, etc) initiate from our various user-mode driver components and flow to nvidia-modeset.ko. This has been shipping since 358.09.
New in 364.12, we’ve added a kernel module named nvidia-drm.ko which registers as a DRM driver. It provides GEM and PRIME DRM capabilities, to support graphics display offload on optimus notebooks. It also, on new enough kernels (>= Linux kernel 4.1 with CONFIG_DRM and CONFIG_DRM_KMS_HELPER), provides MODESET and ATOMIC DRM capabilities to support atomic DRM KMS.
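As a small sketch (the device path and single-device assumption are simplifications, and error handling is omitted), a DRM client can check at run time whether these capabilities are actually exposed by opting in with drmSetClientCap():

/* Build roughly as: cc probe.c $(pkg-config --cflags --libs libdrm) */
#include <fcntl.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    /* Real clients typically enumerate /dev/dri/card* rather than hardcoding. */
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
    if (fd < 0)
        return 1;

    /* Both calls return 0 only if the kernel and driver expose the capability. */
    int have_atomic =
        drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1) == 0 &&
        drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1) == 0;

    close(fd);
    return have_atomic ? 0 : 1;
}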
The DRM KMS support in nvidia-drm.ko is still unproven, and has some interaction issues with SLI, so it is disabled by default. You can enable it with nvidia-drm.ko’s “modeset” kernel module parameter. E.g.,
modprobe -r nvidia-drm ; modprobe nvidia-drm modeset=1
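To make this persistent across reboots, one common approach is a modprobe configuration fragment (the file name here is just an example; depending on the distro, you may also need to regenerate the initramfs):

# /etc/modprobe.d/nvidia-drm-modeset.conf
options nvidia-drm modeset=1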
This much should be sufficient for simple DRM KMS clients that use the “dumb buffer” mechanism for creating and mapping buffers to present through DRM KMS (DRM_IOCTL_MODE_{CREATE_DUMB,MAP_DUMB,DESTROY_DUMB}), such as the xf86-video-modesetting X driver and boot splash screen managers like Plymouth.
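For illustration, here is a minimal sketch of the dumb-buffer allocation path (assuming fd is an already-open DRM device and the libdrm headers are available; error handling is omitted):

/* Build roughly as: cc dumb.c $(pkg-config --cflags --libs libdrm) */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <xf86drm.h>   /* drmIoctl() and the DRM uapi structures/ioctls */

/* Allocate a 32-bpp dumb buffer and return a CPU-visible mapping of it. */
void *create_and_map_dumb_buffer(int fd, uint32_t width, uint32_t height,
                                 uint32_t *handle_out, uint32_t *pitch_out)
{
    struct drm_mode_create_dumb create;
    struct drm_mode_map_dumb map;

    memset(&create, 0, sizeof(create));
    create.width  = width;
    create.height = height;
    create.bpp    = 32;
    drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

    memset(&map, 0, sizeof(map));
    map.handle = create.handle;
    drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map);

    *handle_out = create.handle;   /* later wrapped in an FB via drmModeAddFB() */
    *pitch_out  = create.pitch;

    return mmap(NULL, create.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                fd, map.offset);
}

A real client would then wrap the handle in a framebuffer object with drmModeAddFB() and present it with a modeset or page flip.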
However, more sophisticated DRM KMS clients in the Linux ecosystem, such as most Wayland compositors, currently use gbm to allocate and manage graphics buffers. We do not currently provide a gbm backend driver as part of NVIDIA’s GPU driver package. To ease migration of the existing ecosystem, that is something we’re exploring for a future release.
But, really, we feel that gbm isn’t quite the right API for applications to express their surface presentation requests. At XDC 2014, I made the case for a family of EGLStreams-based EGL extensions to be used instead:
http://www.x.org/wiki/Events/XDC2014/XDC2014RitgerEGLNonMesa/
The concept is that an application creates an EGL object, an EGLOutputLayer, that corresponds to a specific DRM KMS plane. Then, the application creates an EGLStream where the stream’s producer is an EGLSurface and the stream’s consumer is the EGLOutputLayer. Calling eglSwapBuffers() on the EGLSurface presents the content from the EGLSurface to the DRM KMS plane.
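In terms of EGL entry points, the flow looks roughly like this (a sketch only: the EGLDisplay, EGLConfig, DRM plane ID, and dimensions are assumed to come from earlier device and plane enumeration, and error handling is omitted):

#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

void present_to_plane(EGLDisplay dpy, EGLConfig config,
                      uint32_t drm_plane_id, EGLint width, EGLint height)
{
    /* Extension entry points from EGL_EXT_output_base/EGL_EXT_output_drm,
     * EGL_KHR_stream, EGL_EXT_stream_consumer_egloutput, and
     * EGL_KHR_stream_producer_eglsurface, fetched at run time. */
    PFNEGLGETOUTPUTLAYERSEXTPROC eglGetOutputLayersEXT =
        (PFNEGLGETOUTPUTLAYERSEXTPROC)eglGetProcAddress("eglGetOutputLayersEXT");
    PFNEGLCREATESTREAMKHRPROC eglCreateStreamKHR =
        (PFNEGLCREATESTREAMKHRPROC)eglGetProcAddress("eglCreateStreamKHR");
    PFNEGLSTREAMCONSUMEROUTPUTEXTPROC eglStreamConsumerOutputEXT =
        (PFNEGLSTREAMCONSUMEROUTPUTEXTPROC)eglGetProcAddress("eglStreamConsumerOutputEXT");
    PFNEGLCREATESTREAMPRODUCERSURFACEKHRPROC eglCreateStreamProducerSurfaceKHR =
        (PFNEGLCREATESTREAMPRODUCERSURFACEKHRPROC)eglGetProcAddress("eglCreateStreamProducerSurfaceKHR");

    /* 1. Find the EGLOutputLayer that corresponds to the DRM KMS plane. */
    EGLAttrib layer_attribs[] = { EGL_DRM_PLANE_EXT, (EGLAttrib)drm_plane_id, EGL_NONE };
    EGLOutputLayerEXT layer;
    EGLint n_layers = 0;
    eglGetOutputLayersEXT(dpy, layer_attribs, &layer, 1, &n_layers);

    /* 2. Create a stream and attach the output layer as its consumer. */
    EGLStreamKHR stream = eglCreateStreamKHR(dpy, NULL);
    eglStreamConsumerOutputEXT(dpy, stream, layer);

    /* 3. Create an EGLSurface as the stream's producer. */
    EGLint surface_attribs[] = { EGL_WIDTH, width, EGL_HEIGHT, height, EGL_NONE };
    EGLSurface surface =
        eglCreateStreamProducerSurfaceKHR(dpy, config, stream, surface_attribs);

    /* 4. Render to the surface and present: eglSwapBuffers() hands the frame
     * to the stream's consumer, i.e., the DRM KMS plane.
     * eglMakeCurrent(dpy, surface, surface, ctx); ... draw ...
     * eglSwapBuffers(dpy, surface); */
}

The eglstreams-kms-example code referenced below fleshes this out, including the device and plane enumeration.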
There are several nice properties of this approach:
- EGLStreams have explicit producers and consumers.
** If the driver knows exactly how a buffer will be used, it can select the optimal memory format and auxiliary resources that best suit the needs of the specified producer and consumer.
** Otherwise, the driver may have to assume the least common denominator of all possible producers and consumers.
- EGLStreams have explicit transition points between producer’s production and consumer’s consumption.
** When the driver knows exactly when a surface is being handed off from the producer to the consumer, the driver can resolve any synchronization or coherency requirements.
** As an example, NVIDIA GPUs use color compression to reduce memory bandwidth usage (this is particularly important on Tegra). The 3D engine understands color compression but display does not. We need to decompress using the 3D engine before handing off the surface to display, but decompression is expensive, so we only want to do it when necessary. E.g., it would be wasteful and unnecessary to decompress if the consumer were texture, rather than display.
- EGLStreams encapsulate details that may differ between GPU vendors or GPU generations.
** E.g., when performing multisampled rendering on NVIDIA GPUs, we can downsample the multisampled rendering using either the 3D engine or the display engine. If presentation from rendering through display is encapsulated within an API, then the driver implementation has the flexibility to take advantage of downsample-on-scanout when possible.
This family of EGLStreams-based EGL extensions are implemented in 364.12. Here is an example of how to use them for presentation:
https://github.com/aritger/eglstreams-kms-example
We have also posted Weston patches to the wayland-devel mailing list, to demonstrate how a Wayland compositor could take advantage of this:
https://lists.freedesktop.org/archives/wayland-devel/2016-March/027547.html
I should also acknowledge that the current EGL extensions are not yet a complete solution: an EGLStream targets a DRM KMS plane as its consumer, but there currently isn’t a specified EGL mechanism for all of the DRM KMS planes to consume from their respective EGLStreams atomically. This certainly needs to be addressed, but for all the reasons described above, we feel an EGLStream-based approach is the right trajectory.
For what it is worth, this sort of explicitness is also the direction taken in Vulkan: the VK_KHR_display and VK_KHR_display_swapchain extensions allow applications to create surfaces associated with specific display planes, and to queue swaps to them. The graphics driver therefore knows how a surface is going to be used at surface allocation time, and is in the call chain when the surface is enqueued to be displayed.
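For comparison, a VK_KHR_display surface is created directly against a display plane. A rough sketch of the API shape (illustrative only, since as noted above our driver does not yet expose these extensions; the display mode and plane index are assumed to come from the vkGetDisplayModePropertiesKHR / vkGetPhysicalDeviceDisplayPlanePropertiesKHR queries, and error handling is omitted):

#include <vulkan/vulkan.h>

VkSurfaceKHR create_display_plane_surface(VkInstance instance,
                                          VkDisplayModeKHR mode,
                                          uint32_t plane_index,
                                          VkExtent2D extent)
{
    /* The surface is tied to a specific display mode and plane up front,
     * so the driver knows the presentation target at allocation time. */
    VkDisplaySurfaceCreateInfoKHR info = {
        .sType           = VK_STRUCTURE_TYPE_DISPLAY_SURFACE_CREATE_INFO_KHR,
        .displayMode     = mode,
        .planeIndex      = plane_index,
        .planeStackIndex = 0,
        .transform       = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR,
        .globalAlpha     = 1.0f,
        .alphaMode       = VK_DISPLAY_PLANE_ALPHA_OPAQUE_BIT_KHR,
        .imageExtent     = extent,
    };
    VkSurfaceKHR surface = VK_NULL_HANDLE;

    /* Requires the VK_KHR_display instance extension. */
    vkCreateDisplayPlaneSurfaceKHR(instance, &info, NULL, &surface);
    return surface;
}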
Anyway, for users interested in running a Wayland compositor on top of NVIDIA’s Linux driver:
- Install 364.12 on a recent distro whose kernel has atomic DRM KMS support (Linux 4.1 or later, as noted above).
- Enable NVIDIA’s DRM KMS with the “modeset” nvidia-drm.ko kernel module parameter.
- Build and run Weston with the patches we posted to the wayland-devel mailing list.
I should note that Wayland clients shouldn’t require any modification for gbm vs EGLStreams: the difference between the two approaches should only affect the Wayland compositor implementation and the EGL driver.
Going forward, our hope is that:
- We can have some discussion with the DRM community about how to better incorporate EGLStreams into atomic KMS.
- Other EGL implementors will consider implementing EGLStreams and friends.
- Wayland compositor authors will consider adding a path for EGLStreams-based presentation, using eglstreams-kms-example and/or our Weston patches as examples.