Reporting graphics driver bugs?

What is the correct channel for software developers to report graphics driver bugs to the driver development team?

If you have a solid reproducer, we can file a bug report with the required information so that our QA can verify it and our driver teams can investigate.

You can either post it here (a paper-clip icon appears when hovering over one of your submitted posts), or, if confidentiality is required (unannounced products, licensing restrictions, etc.) or the amount of data is too large, we can contact you and set up an ad-hoc FTP account for data exchange.
Just let us know your preferred way.

With respect to bug report details, I normally use the checklist below to reduce turnaround times.
The list is centered around display driver and GLSL issues. In general, the exact versions and settings of anything needed to reproduce the issue are required.

We normally need the following information to start analysis of a bug report.
(This is the general list and might not apply to all reports.)

  1. Operating system version.
On Linux, an nvidia-bug-report.log generated by running nvidia-bug-report.sh as root.
  2. Graphics hardware.
  3. Graphics driver version.
  4. Display Control Panel settings for screen resolution, monitor configs, and driver settings.
    Under Windows: NVIDIA Control Panel -> Help -> System Information -> Save.
  5. Reproducer project.
    At least an executable that shows the problem. The simpler, the better.
    Make sure all files necessary to run it standalone are included (manifests, runtimes). Assume a clean test system!
    Source code in the failing state is highly appreciated.
    For GLSL compiler failures (C9999), the minimal set of shader sources reproducing the problem.
  6. Description of the individual steps to reproduce the problem.
  7. Description of the expected result (screenshots if possible).
    Performance issues require absolute measurement data and a description of how to reproduce them.
  8. If there is a crash in an NVIDIA module, the exact crash offset.

For end-user issues this is not the correct forum.

I do not have any issues to report at this time, but wanted to get my protocol established in case anything does come up. Thanks for the information.

Hi Detlef,

In relation to some product optimizations we are currently working on, we seem to have stumbled on an NVIDIA driver bug on Windows 8.1+. We have verified that Intel and AMD GPUs do not have the same issue, so we are fairly confident that this is an NVIDIA bug.

It would be great if you could PM me an email channel for your driver QA team, so we can report what we are trying to achieve and how to reproduce the bug.

Best Regards,
Henrik Levring

Hi Henrik,
if you have a solid reproducer for a driver issue, you should be able to file a bug report with all required details to reproduce it directly through the “Customer Feedback” links at the bottom of this site:

Thanks for the guidance, we have submitted the issue: Question Reference #150402-000036

I have a fresh install of Windows 7 and installed the new NVIDIA driver version, but my Aero theme is not working. Please help me.


I have built the VRWorks-Graphics-4.12 engine for UE4

from this build

and got some errors.

My VS2015 install is from an original ISO.
Could this be the problem?

During setup I also got error MSB3075; I replaced WindowsPlatformCompilerSetup for the SP3 problem, but still got errors.

Could anyone check these errors?

NVAPI does not work.

Since I don’t want to waste my time at position 20 in your support queue, I am posting the bug report here:

The bug can be reproduced using my 2 year old WebGL page:

The linked screenshot shows the expected correct rendering (left) and the defective rendering (right):

The web page works flawlessly on WinXP with your respective old drivers (in whatever browser: Chrome, FF, …). And according to the feedback I received from various colleagues, it still works on Win10, as long as one is using non-NVIDIA cards (e.g. Intel HD graphics, AMD 5450).

I just installed a Win10 (64-bit) PC from scratch (using my old GTX 460 card that I had successfully used with my old WinXP setup). I installed the latest NVIDIA drivers I could find on your site (see attached: System Information 09-24-2016 21-17-00.txt). And on this Win10 PC the WebGL page is no longer displayed correctly, whether I try it in Chrome, FF or Edge (another colleague confirmed he has the same problem on his machine: Win10 Home, 64-bit, with a GTX 560 Ti).

From what I can see, the fragment shader of my WebGL code (see file orbittrap.js of the above web page) no longer produces the correct output when running with a recent NVIDIA driver.

Hi Detlef,
I’m not sure if this bug report belongs here as it’s MacOS+OpenCL related but I couldn’t find a better place to post it.
The issue seems to have been introduced with the most recent Web Driver update (346.03.15f04). The driver refuses to load AppleIR (which worked fine prior to the update) if the kernel has a texture as its argument. Please find the details attached.
Also, let me know if I should provide more info or move it elsewhere.
nv_report.txt (1.64 KB)
loadbin.gz (3.18 KB)

Hello. Note that there is a conflict between the latest GeForce Game Ready Driver, version 378.49 (January 2017), and Maven (latest version 3.3.9), which is used by thousands of developers around the world. With this driver version, when using the Eclipse IDE, the console capturing the standard output is blank; when using IntelliJ IDEA, the Maven plugin is mute. I succeeded in capturing this not very explicit error message: llij.ide.plugins.PluginManager - Cannot reconnect. I recognized this bug because it had already happened in the past, so I have just downgraded the NVIDIA driver to the previous version (376.33, dated 14.12.2016), and Maven is working fine as before. Please fix this bug in the next version. Thanks in advance.

Hello there!

I have a possible driver issue that we’re seeing on GTX 760 cards: our software crashes only on these cards. It works fine on most other cards, including more recent GTX models and lower-spec cards, e.g. the Intel Iris 5200.

Card: GTX 760
Driver: 378.78

This is the app for repro:


Simul Software Ltd


The latest driver update, 384.76, causes our Vulkan application to run significantly slower than with previous driver versions; other than the slowdown, everything seems to be the same. Using RenderDoc, we have localized the shaders causing the slowdown, but they are also our slowest and most important shaders, so this is very inconvenient. We are running Windows 10 and have tested this on a GTX 1070, 1080 and 1080 Ti.

Viktor Alm


I see incorrect behavior on my computer that is most likely due to a bug in the NVIDIA driver.

When rendering a large rectangle (likely decomposed into two OpenGL triangles) with the Visualization Toolkit (VTK), with the camera very close to the plane, one pixel in the center has a color slightly different from what is expected. The color of that pixel seems to be some average of the foreground and the background: the background color bleeds through the foreground object.
Additionally, the problem disappears if multisampling is deactivated. It looks like one of the subsample colors is not computed correctly, and that color is then used when computing the whole pixel color.

The small GitHub project here reproduces the error:

Why it is most likely a bug in the drivers:
the problem only appears on Ubuntu 16.04 (not on Windows 10, not on macOS), and it disappears when using the Intel graphics card instead of the NVIDIA one on the same computer.

Graphics card: NVIDIA Quadro M1000M
The problem appears with every driver I was able to test (including 396.18) on Ubuntu 16.04. The drivers were downloaded from:


I have implemented the occlusion culling project based on NVIDIA’s raster method with Vulkan (based on

It had worked very well until driver version 397.31; now the exe crashes. After some research I found the cause. My configuration:

I have an octree where the leaf nodes hold geometry. Each leaf node has a nodeId and a bounding box.
The bounding boxes are stored in a storage buffer big enough to hold all leaf bounding boxes (numLeaves * size of the bounding-box representation).
Additionally, I have a storage buffer (the visibility buffer) of size numLeaves * sizeof(unsigned int). Before the OC step, the visibility buffer is set to zero. Then, if a node is visible, the fragment shader sets its entry to 1 (node_vis[node_id] = 1). Further steps evaluate and use the results.

After rendering the previously visible pass, I use point rendering and provide the nodeIds as points to trigger the vertex shader. The vertex shader passes the nodeIds through to the geometry shader. The geometry shader creates the bounding-box vertices (just like in the project on GitHub) and routes the nodeIds, unchanged, to the fragment shader.

The fragment shader looks something like this:


layout (early_fragment_tests) in;

layout (binding = 0) buffer BB { // storage buffer to mark visible nodes
   uint node_vis[];
};

layout (location = 0) flat in uint node_id; // input from the geometry shader (originating in the vertex shader)

void main()
{
   node_vis[node_id] = 1;
}

The error:
The shader crashes because node_id receives invalid values that cause an access violation on the visibility buffer.
It seems the robust buffer access feature is broken.

So if I do this:

void main()
{
    if (node_id < max_num_leafs) // node_id is unsigned, so no lower-bound check is needed
        node_vis[node_id] = 1;
}

It does not crash. The bug seems to be fixed in 397.40 (beta). However, there is another issue
with both driver versions (397.31 with the if in main()):
the nodeIds seem to be unstable.
Same scene, no camera movement, even with the depth test off (per pipeline and/or by removing the early-depth-test flag).

Some objects don’t pass the visibility test (e.g. there are 1000 nodeIds but only 900 are visible, or it alternates).
The problem is that the wrong objects are culled away.

This should not happen; in fact, it cannot happen, particularly with the depth test off.

Further debugging has shown that the input nodeIds not only exceed the maximum number of nodes, but are also not the correct ids.
Let’s take 10 nodeIds = max_num_nodes (depth test off).
Correct behavior: the nodeIds should be 0,1,2,3,4,5,6,7,8,9, and maybe more (but robust buffer access should catch those).
Behavior with the newest drivers: 0,1,50,4,88,7,8,100 and so on, but ids 2,5,6,9 never seem to appear (they get lost somehow).
And sometimes it alternates (with large meshes with many nodeIds).
Even if the nodeId output from the geometry shader is hardcoded to some value (e.g. 0), the input nodeIds in the fragment shader have different random values.

It seems the error is somewhere between the geometry and the fragment shader, and the data that should be passed between them is getting corrupted.

It works with older official drivers (I tested some versions before 397.31) and it also works with 389.20 (.10).
Tested on different hardware (GTX 680, GTX 960, GTX 1080). OS: Win10 x64 (no issues with the older drivers, issues with the newest one on every test setup).


The newest drivers, 397.55 and 397.54, still have the issue: the official beta driver (397.55) crashes, and the Vulkan beta driver (397.55) is unstable (as described above).

Update: I have written a small self-contained example that reproduces this bug:

It was created with Microsoft VS 2015.

The important places in the code to look at:
bbox.geom - creates the bounding boxes and passes the node_id through to the fragment shader.
bbox.frag - receives the node id as a uint with the flat modifier. In the shader’s main is the if that shows the issue: if the received node id is invalid, the bboxes are colored red, and white if it is valid. When the program runs on a buggy driver, the color of the bboxes alternates randomly.

Please also read the comments at the very beginning of main.cpp.

Update: tested with 397.96; the issue is still there.


We are running an experiment in MATLAB on Ubuntu 18.04.2 LTS with a GF106GL Quadro 2000. We are using NVIDIA 3D Vision to view our stimuli, and we seem to have a driver issue causing the glasses to flicker (on/off) later in the task, which we think is due to faulty syncing. We believe it’s a driver issue and not a hardware issue because we’ve run the same thing on Windows and don’t get the same problem.

Can someone point us to the correct driver for our system?



  1. Windows 10 ver1909
  2. RTX 2070 MAXQ Lenovo Y740
  3. Graphics driver version v451.67
  4. NVIDIA System Information report created on: 10/13/2020 01:06:37
    System name: LAPTOP-SD18KMJD

Operating System: Windows 10 Home, 64-bit
DirectX version: 12.0
GPU processor: GeForce RTX 2070 with Max-Q Design
Driver version: 451.67
Driver Type: DCH
Direct3D API version: 12
Direct3D feature level: 12_1
CUDA Cores: 2304
Core clock: 1185 MHz
Memory data rate: 12.00 Gbps
Memory interface: 256-bit
Memory bandwidth: 384.06 GB/s
Total available graphics memory: 24535 MB
Dedicated video memory: 8192 MB GDDR6
System video memory: 0 MB
Shared system memory: 16343 MB
Video BIOS version: 90.06.2E.40.0F
IRQ: Not used
Bus: PCI Express x16 Gen3
Device Id: 10DE 1F50 3FEE17AA
Part Number: 4914 0030


nvui.dll NVIDIA User Experience Driver Component
nvxdplcy.dll NVIDIA User Experience Driver Component
nvxdbat.dll NVIDIA User Experience Driver Component
nvxdapix.dll NVIDIA User Experience Driver Component
NVCPL.DLL NVIDIA User Experience Driver Component
nvCplUIR.dll 8.1.940.0 NVIDIA Control Panel
nvCplUI.exe 8.1.940.0 NVIDIA Control Panel
nvWSSR.dll NVIDIA Workstation Server
nvWSS.dll NVIDIA Workstation Server
nvViTvSR.dll NVIDIA Video Server
nvViTvS.dll NVIDIA Video Server
nvLicensingS.dll NVIDIA Licensing Server
nvDevToolSR.dll NVIDIA Licensing Server
nvDevToolS.dll NVIDIA 3D Settings Server
nvDispSR.dll NVIDIA Display Server
nvDispS.dll NVIDIA Display Server
PhysX 09.19.0218 NVIDIA PhysX
NVCUDA64.DLL NVIDIA CUDA 11.0.208 driver
nvGameSR.dll NVIDIA 3D Settings Server
nvGameS.dll NVIDIA 3D Settings Server
5. I am facing screen flickering in Dedicated Graphics mode (selected in the BIOS) with any newer version of the NVIDIA drivers; I had to roll back to this driver to get my system working. I can remove the flickering by turning HDR off.

Kindly look into this; I am facing it in all applications, on the desktop and elsewhere.

Hi guys, can I run PES 2021 on this system?

CPU: Core i7 4702MQ 2.20GHz up to 3.20 GHz

I just installed the GeForce Game Ready Driver for Windows 10. After some digging, I found that when my gaming controller is plugged into my laptop, this driver keeps my computer from starting the screen saver and from going to sleep. For the screen saver and sleep to work, I have to either unplug the controller or disable the GeForce Game Ready Driver. Can you please release an update that allows the computer to use the screen saver and sleep mode when the computer and controller are idle? Thanks!