What are the chances that Nvidia can at least fix the performance regression compared to Windows without such a huge redesign? I don’t need them to beat AMD in terms of VKD3D, I just want it to be on par with Windows at the very least.
Sigh, if it’s the case that the performance regression can never be fixed without an architectural change, my next GPU will for sure be AMD then. There’s not a chance in hell I’ll be going back to Windows.
It’s a shame because I’ve been a long-time Nvidia customer (I like DLSS, ray tracing, and their AI support), but the 15-30% performance regression is just way too much to stomach. The moment AMD comes out with a high-VRAM card, I’m gone.
People are still only guessing why the Nvidia drivers perform so badly with vkd3d-proton.
With the code being closed and Nvidia being silent about this, there’s not much to do…
Well, TLOU and Oblivion Remastered are actually two known games with potential issues on Linux with the 9070 XT, but overall the card performs really well considering its RDNA4 architecture is brand new. Besides, if RADV doesn’t work well, AMD users can just switch to AMDVLK on the fly instead.
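For anyone wondering how that on-the-fly switch works: the Vulkan loader picks whichever ICD manifest you point it at. A minimal sketch (the `.json` paths are the usual locations on most distros, but may differ on yours; `vkcube` is just a stand-in for any Vulkan app):

```shell
# Force RADV (Mesa's Vulkan driver) for a single run:
VK_DRIVER_FILES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json vkcube

# Force AMDVLK instead (only works if the amdvlk package is installed):
VK_DRIVER_FILES=/usr/share/vulkan/icd.d/amd_icd64.json vkcube

# Older Vulkan loaders use the now-deprecated name for the same variable:
VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/amd_icd64.json vkcube
```

If AMDVLK’s own selection layer is installed, setting `AMD_VULKAN_ICD=RADV` is another way to fall back to RADV without touching manifest paths.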
Indiana Jones TGC runs better with amdvlk, for example, probably because raytracing still needs work in radv.
The weird thing about the vkd3d performance issue is that it follows every Nvidia architecture and it’s also present under Windows to some degree, so it’s a core driver issue.
I see a 10% performance penalty across the board when rendering Plasma on my iGPU with the HDMI cable plugged into my dGPU. However, when rendering on the iGPU and also plugged into the iGPU, I get a 10% performance increase. This is relative to a baseline of rendering Plasma on the dGPU while plugged into the dGPU.
Hardware is an Asus B650 mobo, a 9950X, and a 5070 Ti, running Fedora 42 with the latest testing Nvidia driver (575.64).
I don’t want to offend or humiliate anyone, but sometimes these guesses feel like tarot card readings. I’m eager for confirmation from Nvidia: how the developers plan to fix this, what their plans are beyond what they’ve already published about Wayland, etc., and how long it might take. But I doubt that will happen.
Considering all the work they did with explicit sync and Wayland, I wouldn’t be surprised if they allocated some resources into a deep architectural redesign of their memory driver subsystem.
Shouldn’t both of these issues heavily affect every GPU workload? Then why is there zero regression in Vulkan-native games (if not a negative regression, as in Cunningham’s test, where performance on Linux is usually slightly better)?
Let’s also not forget the virtually absent regression with DXVK, and the still-present regression when using VKD3D on Windows (source).
It’s very strange, because all the data should point to vkd3d being the problem, but we also know this is not the case.
What is certain is that the drivers are not monolithic. We know this to be the case for BG3 (using VK), Doom Eternal, and Indiana Jones and TGC, because these games require setting __GL_13ebad=0x1 to obtain normal performance, whereas not setting it leads to degraded performance. This suggests there are several possible optimisation paths within the same driver. I do not know if this is a good or a bad thing.
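For reference, applying that variable is a one-liner. A sketch, assuming a Steam install (the variable itself comes from this thread; the game binary name is just a placeholder):

```shell
# As a Steam launch option (game Properties -> Launch Options):
__GL_13ebad=0x1 %command%

# Or when launching a game directly from a terminal:
__GL_13ebad=0x1 ./some_game_binary
```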
That could easily be checked using the Tracy tool (from the git issue), or by stracing ioctls in a Vulkan environment: run a Vulkan-native game and compare against the DX12 one.
Unfortunately it seems that Nvidia will have no competition in the AI data-center market, while AMD will have no competition in the desktop/gaming market (well, Intel maybe). This is bad news for users in both of these markets… Fans of conspiracy theories may even create one here ;-]
That’s because native Vulkan titles grab a few big VkDeviceMemory blocks at load time and do all further sub‑allocations in user space; they never hit the driver’s per‑allocation blocking ioctl in the render loop, hence zero runtime regression.
That may well not be true. I misunderstood. Corrected by pixelcluster.