PRIME offloading: Unable to run chrome

I’m using Fedora 30 with the negativo17 repo. Installed the latest 435.21 + xorg patches and it seems to work on my ASUS UX501VW (Intel HD 530 + NVidia GTX 960M).
I tried to run chrome with the NVidia card as follows:
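Something along these lines, using NVIDIA's documented PRIME render offload environment variables (the browser binary name is an example and may differ per distro):

```shell
# NVIDIA's documented PRIME render offload switches; "google-chrome" is
# an example binary name, substitute whatever your distro installs.
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia google-chrome
```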


but I keep getting the following error:

[844:844:0912/] XGetWindowAttributes failed for window 113246212.
[844:844:0912/] Failed to get GLXConfig
[844:844:0912/] gl::GLContext::CreateOffscreenGLSurface failed
[844:844:0912/] Could not create surface for info collection.
[844:844:0912/] gpu::CollectGraphicsInfo failed.
[844:844:0912/] Exiting GPU process due to errors during initialization
[1:1:0912/] FontService unique font name matching request did not receive a response.
[1:1:0912/] FontService unique font name matching request did not receive a response.

Chrome will open, but there’s no hardware acceleration (chrome://gpu) and it falls back to a software renderer (OpenGL ES 3.0 SwiftShader).
Are there any flags or anything else I need in order to run the browser with proper acceleration from the NVidia card? Is it even supposed to work?

Any help would be much appreciated!

Chrome gets confused about the second GPU, I guess. It works when using


I have tried that before, and it reports hardware acceleration, but that’s because it actually falls back to the Intel GPU and not the NVIDIA one (GL_RENDERER: Mesa DRI Intel® HD Graphics 530 (Skylake GT2)). Are you sure you’re getting it to run on the NVIDIA GPU?

You’re actually right, NVIDIA_only is only valid in a Vulkan context.
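For reference, that variable only affects Vulkan device enumeration and does nothing for GLX/OpenGL clients; a sketch of a Vulkan-side offload launch (vkcube is just an example client):

```shell
# __VK_LAYER_NV_optimus=NVIDIA_only restricts Vulkan device enumeration
# to the NVIDIA GPU; it has no effect on GLX/OpenGL programs like Chrome.
# vkcube (from vulkan-tools) is only an example Vulkan application.
__NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only vkcube
```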

I noticed this too, but haven’t had a chance to dig into it much. I suspect Chrome is getting confused by the GLX fbconfigs being presented in render offload mode. They’re slightly different from what you see in non-offload mode because of limitations in how the driver is able to map NVIDIA fbconfigs to X11 Visuals on the target screen.

I suspect Chrome is getting confused by the backwards way it creates a window and then tries to find an fbconfig. There’s a comment about it in the source code here, and I have a feeling that doing what the comment suggests would fix the problem:

// TODO(kbr): this is not a reliable code path. On platforms which
// support it, we should use glXChooseFBConfig in the browser
// process to choose the FBConfig and from there the X Visual to
// use when creating the window in the first place. Then we can
// pass that FBConfig down rather than attempting to reconstitute
// it.

Thank you Aaron, that indeed seems to be the case.
So I guess it’s something that needs to be fixed by the Chromium team. Hopefully this is the only application affected and they’ll fix it soon. Otherwise this is going to be a problem for any Electron/NW.js app that wants or needs access to the NVidia GPU for better acceleration.

I can confirm that it is not working for Firefox either.

I looked into the Firefox source a bit and it turns out that the test it uses to decide whether to enable WebGL is failing because it looks for a single-buffered GLX visual, and we currently only report double-buffered ones. You can work around the problem by going into about:config and setting webgl.force-enabled to true.
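If you prefer setting that outside the UI, a sketch using a user.js file in the Firefox profile (the profile path below is a placeholder; substitute your actual one):

```shell
# Placeholder profile path - substitute your real Firefox profile directory.
PROFILE="$HOME/.mozilla/firefox/example.default-release"
mkdir -p "$PROFILE"
# Firefox applies user_pref() lines from user.js at startup.
echo 'user_pref("webgl.force-enabled", true);' >> "$PROFILE/user.js"
```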

The problematic check is in toolkit/xre/glxtest.cpp:

///// Get a visual /////
int attribs[] = {GLX_RGBA, GLX_RED_SIZE,  1, GLX_GREEN_SIZE,
                 1,        GLX_BLUE_SIZE, 1, None};
XVisualInfo* vInfo = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
if (!vInfo) fatal_error("No visuals found");

Note that if you don’t specify GLX_DOUBLEBUFFER, then glXChooseVisual will only return single-buffered visuals.
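You can see this from the shell too, assuming glxinfo (mesa-utils) and a running X session: in offload mode the “db” (double buffer) column of the visuals table shows “y” for every visual.

```shell
# Print the GLX visuals table; the "db" column marks double-buffered
# visuals. Requires a running X session; glxinfo ships in mesa-utils.
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
    glxinfo | sed -n '/^ *visual /,$p'
```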

The actual code to create a WebGL context in gfx/gl/GLContextProviderGLX.cpp uses glXChooseFBConfig in a way that allows it to choose a double-buffered fbconfig, so it works fine if you bypass the glxtest check:

int attribs[] = {LOCAL_GLX_DRAWABLE_TYPE,
                 // ... (several attribute/value pairs elided) ...
                 LOCAL_GLX_ALPHA_SIZE,   minCaps.alpha ? 8 : 0,
                 LOCAL_GLX_DEPTH_SIZE,   minCaps.depth ? 16 : 0,
                 LOCAL_GLX_STENCIL_SIZE, minCaps.stencil ? 8 : 0,
                 0};

int numConfigs = 0;
scopedConfigArr = glx->fChooseFBConfig(display, screen, attribs, &numConfigs);

(If you don’t specify GLX_DOUBLEBUFFER then it defaults to GLX_DONT_CARE).

@aplattner: Thanks - WebGL is indeed working this way, but it seems like WebRender is disabled.

about:support contains the following information:
Compositing: Basic
opt-in by default: WebRender is an opt-in feature
available by user: Force enabled by pref
unavailable by runtime: WebRender initialization failed
Failure Log
(#0) Error Failed to connect WebRenderBridgeChild.
(#1) GP+[GFX1-]: Failed GL context creation for WebRender: 0
(#2) GP+[GFX1-]: [OPENGL] Failed to init compositor with reason: FEATURE_FAILURE_OPENGL_CREATE_CONTEXT

Does it make sense to report these as bugs upstream (i.e. to the Firefox/Chromium bug trackers) so they can get resolved someday?

It appears to be a problem with how they detect the correct WebGL environment.

Good news: Chrome unstable (80.0.3987.7) works for me with GPU offloading.
I’m currently using the following options:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia /usr/bin/google-chrome-unstable --ignore-gpu-blacklist --enable-gpu-rasterization --enable-native-gpu-memory-buffers --enable-zero-copy

Whatever command-line options I choose, Chromium doesn’t want to switch to the NVIDIA GPU. Even the flags

--gpu-testing-vendor-id=0x10de --gpu-testing-device-id=0x1d10

seem to be ignored.
Has anyone got it working and confirmed under chrome://gpu?
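One way to rule out an environment problem first, assuming glxinfo is installed: check that render offload works outside the browser at all. The renderer string should name the NVIDIA GPU, not Intel or llvmpipe.

```shell
# If this still prints an Intel/Mesa renderer, the offload setup itself
# (driver/Xorg configuration) is the problem rather than Chromium's flags.
__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia \
    glxinfo | grep "OpenGL renderer"
```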