I have a GTX 650. On most drivers (most recently 446.14) the hue and digital vibrance settings in the Nvidia Control Panel apply on top of the brightness/contrast/gamma ones. Starting with 451.67, the order of these settings seems to be reversed: the brightness/contrast/gamma settings are applied on top of the hue/vibrance instead.
The same difference occurs on every version of Windows 7 and 10 (and also shows up between drivers 388.00 and 388.13, since Windows 10 version 1909). On newer GPUs like the GTX 1660 SUPER, the new order applies in every case.
This matters for a project I'm working on where the desktop colors change along with the music playing. I would like to know exactly what caused this change (driver or DirectX related, I assume) and whether there is anything that can be done to apply these adjustments in the old order on any unit, or to reproduce them.
Hello @DeadAirCK and welcome to the NVIDIA developer forums!
Interesting observation! Thank you!
Did you select the red channel in your pictures on purpose?
Your 446.14 image seems wrong. Setting Brightness/Contrast/Gamma to the lowest possible values for the red channel only should not result in a red image like that. That might have been an actual bug in the older driver. But of course there is also the unknown of your monitor; I, for example, get significantly different results across my three monitors here.
That said, I think the difference you observe is rather that Vibrance/Hue is applied taking the per-channel differences into account in the newer driver, while in the older driver it was applied as if all channels had the same setting.
For what it's worth, I think the newer behavior is logically correct. If you make per-channel changes in B/C/G, you do want to see a difference when applying Vibrance/Hue; otherwise per-channel changes would not make much sense.
As for DirectX, it is not involved; this is simply the final digital signal sent to the monitor, so the calibration of the monitor in use will affect this a lot.
Regarding reproducing the old behavior or sharing any technical details on the changes, that is not something I would be able to disclose, I am sorry.
This is probably not the answer you were looking for, but I hope you will find a workaround for your project.
Thanks kindly for the reply.
Unfortunately, I have not noticed any difference regarding the monitor; the same settings have also been tested on three different monitors with identical results.
The red channel was picked at random, and the image shows exactly how the colors behave with the highlighted settings. Your observation about the channels does make sense if you think about it: indeed, on 446.14, with the hue at 0 there would be no reds in the display, but changing the hue slider changes the hue of the entire display regardless of the channel settings; likewise, setting the vibrance to 0 turns the whole display grayscale instead of affecting a single channel.
The applying order is mostly a matter of personal preference, really (at least for the project in question; I've always thought it allowed for a greater number of color combinations). For example, I've had a Fermi-based GT 440 for years which behaved the old way in every case. I only recently discovered this difference, and I assumed that, since the old order had persisted for so long, it must have been changed by mistake at some point. At first I thought it depended entirely on the GPU architecture, but this test on a Kepler GPU has shed more light on a possible driver-related cause, considering one unit behaved differently on two drivers.
I'm guessing that, since I'd be the only person benefitting from an option of sorts to revert this order, it might be too much to ask. Nevertheless, thanks once again for the reply; it's the first real response I've received on this issue :P
You definitely made me curious as well now, having worked in the past on the 2D and display support for our Tegra devices. One of the two driver implementations must be wrong from a color theory and display logic perspective, and I tend to think it was the old one.
I asked around and am waiting for some comments; maybe I can shed some light on this later on.
For now, one last thing to check might actually be on the Windows side: do you by any chance have HDR enabled? With the introduction of HDR some time ago in Windows there is another parameter which has a huge influence on Gamma. It should only affect the Gamma curve and how quantization kicks in, not really hue or saturation, but who knows with Windows…
Thanks a lot once more for the help; I'd really welcome multiple comments on the matter.
None of my three monitors seems to be HDR capable, I'm afraid, so I couldn't check whether it makes any significant difference (although something tells me it wouldn't).
As an extra note regarding the reproduction of the B/C/G settings I was trying to achieve: I've recently discovered I might be able to obtain it through Windows' Magnification API, which apparently applies a color matrix as a screen filter (although I have yet to look further into whether, and how, I can re-enact the settings there exactly as in the control panel).
So far it makes for a workaround, since the driver's hue and vibrance would once again apply to the entire display without taking channel differences into account, and it would no longer rely on that driver difference.
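For anyone curious, this is a minimal sketch of what I mean, assuming the Magnification API's fullscreen color effect (Windows 8+) and the usual 5x5 ColorMatrix convention where colors are treated as [r g b a 1] row vectors; the values are just placeholders, not the actual settings I'm after:

```cpp
// Minimal Magnification API color-effect sketch (placeholder values).
#include <windows.h>
#include <magnification.h>
#pragma comment(lib, "Magnification.lib")

int main()
{
    if (!MagInitialize())
        return 1;

    // Scale the red channel (contrast-like) and add a small offset to
    // all three channels (brightness-like). Gamma is non-linear and
    // cannot be expressed with a matrix like this.
    MAGCOLOREFFECT effect = { {
        { 0.5f, 0.0f, 0.0f, 0.0f, 0.0f },   // red
        { 0.0f, 1.0f, 0.0f, 0.0f, 0.0f },   // green
        { 0.0f, 0.0f, 1.0f, 0.0f, 0.0f },   // blue
        { 0.0f, 0.0f, 0.0f, 1.0f, 0.0f },   // alpha
        { 0.1f, 0.1f, 0.1f, 0.0f, 1.0f }    // additive offsets
    } };

    MagSetFullscreenColorEffect(&effect);
    Sleep(5000);        // keep the effect on screen for a moment

    MagUninitialize();  // restores the normal colors
    return 0;
}
```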
I never tried using the Magnifier API, feel free to share your experience here!
I got some feedback from one person involved in Control Panel and color settings development.
Older GPUs had a different color space correction pipeline which, in combination with older OS window compositing behavior, used the “old” color settings order. With respect to color space theory the complete setup was not entirely correct, but it got things displayed correctly through tweaks.
Then the GPU HW was adjusted for correct CSC processing, that is, the CSC matrix is applied in linear color space before the non-linear gamma etc. is applied. The driver and the CPL were kept backwards compatible for a while, but with Windows 10 that was no longer an option, so the order was switched. I am sure it is documented in some older release notes.
I understand. Well, in that case I guess the only solution is to create my own personal control panel that mimics Nvidia's.
I've got to admit that, while I do have some experience with C#, I'm not a specialist in color space theory (at the moment I'm mostly trying to adapt an existing program to my personal needs, one that processes the display through a single color matrix using that Magnification API I mentioned).
Therefore, I might need a bit of insight into how the driver's BCG settings work: do they also edit a color matrix to change the display? As in, with the right values in a 5x5 color matrix, would I be able to change the BCG for each channel (or all three) just like the driver does?
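To be concrete about what I mean, here is my own rough sketch of how per-channel brightness and contrast could be packed into such a 5x5 matrix (using the usual ColorMatrix convention, not the driver's actual formulas; as far as I can tell, gamma cannot be expressed in a linear matrix at all):

```cpp
// My own sketch: a 5x5 ColorMatrix built from per-channel brightness
// (additive, roughly [-1, 1]) and contrast (scale around mid grey,
// 1 = unchanged). Colors are [r g b a 1] row vectors times the matrix.
struct ColorMatrix5x5 {
    float m[5][5];
};

ColorMatrix5x5 MakeBrightnessContrastMatrix(const float brightness[3],
                                            const float contrast[3])
{
    ColorMatrix5x5 cm = {};   // start from all zeros
    cm.m[3][3] = 1.0f;        // leave alpha untouched
    cm.m[4][4] = 1.0f;

    for (int c = 0; c < 3; ++c) {
        cm.m[c][c] = contrast[c];                                  // scale
        cm.m[4][c] = brightness[c] + 0.5f * (1.0f - contrast[c]);  // offset
    }
    return cm;
}
```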
Whatever you do will be applied after what the driver does in terms of adjustment.
The HW pipeline applies a 3x3 matrix first to do any needed color space conversion (CSC) and afterwards applies Gamma (and brightness/contrast) through a LUT. The final color values are the ones adjusted for the specific display.
So applying a conversion matrix afterwards will allow you to adjust any values, but you will not be able to reverse settings that were applied by the driver because of the transformation of color values from linear to non-linear space.
Alright, time to bump my own thread with a couple of updates on the issue. I've gotten a bit smarter since my last reply, so I thought I'd share my recent progress:
First off, forget about the Magnification API: all it can do is apply these color matrices and nothing else. It isn't even possible to grab the magnified image and perform adjustments on it, since from what I've read it's injected directly into the window in question (DWM, undocumented stuff we can't meddle with, etc.).
Second, the most reliable way (as of now) that I could reproduce the Nvidia BCG settings is through a C++ screen capturing program that applies these adjustments at a specified rate over a window that mirrors everything underneath it. I've been tinkering with the source code of a program I found that does something like this (using the ancient GDI and BitBlt; I'll be getting to optimization later on, since my ultimate aim is to have the GPU do the heavy lifting somehow).
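For reference, the capture part boils down to something along these lines (a trimmed-down sketch of the GDI/BitBlt approach with error handling omitted, not the exact code of the program I'm adapting):

```cpp
#include <windows.h>
#include <vector>

// Grab the primary screen into a 32-bit BGRA buffer.
std::vector<BYTE> CaptureScreen(int width, int height)
{
    HDC screenDC = GetDC(nullptr);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // Copy the screen contents into the memory bitmap.
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);   // deselect before reading the bits

    // Pull the raw pixels out so they can be color-adjusted.
    BITMAPINFO bi = {};
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = width;
    bi.bmiHeader.biHeight      = -height;   // negative = top-down rows
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<BYTE> pixels(static_cast<size_t>(width) * height * 4);
    GetDIBits(memDC, bmp, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(nullptr, screenDC);
    return pixels;
}
```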
Similar to what Markus said earlier in the thread, it applies these color settings to the screen capture stored in a buffer using one lookup table per channel, with each adjustment computed from a formula I found in multiple image filter programs (roughly as sketched below).
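Roughly, this is what I mean (a sketch with my own constants and ordering, using the textbook brightness/contrast/gamma formulas; not necessarily what the driver does):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Build a 256-entry lookup table for one channel.
void BuildLut(uint8_t lut[256], float brightness, float contrast, float gamma)
{
    for (int i = 0; i < 256; ++i) {
        float v = i / 255.0f;
        v = (v - 0.5f) * contrast + 0.5f;                        // contrast around mid grey
        v += brightness;                                         // additive brightness
        v = std::pow(std::clamp(v, 0.0f, 1.0f), 1.0f / gamma);   // gamma
        lut[i] = static_cast<uint8_t>(v * 255.0f + 0.5f);
    }
}

// Apply the per-channel LUTs to the captured BGRA buffer.
void ApplyLuts(std::vector<uint8_t>& pixels,
               const uint8_t lutR[256], const uint8_t lutG[256],
               const uint8_t lutB[256])
{
    for (size_t i = 0; i + 3 < pixels.size(); i += 4) {
        pixels[i + 0] = lutB[pixels[i + 0]];   // blue
        pixels[i + 1] = lutG[pixels[i + 1]];   // green
        pixels[i + 2] = lutR[pixels[i + 2]];   // red
    }
}
```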
From what I've been able to compare (which is a bit harder since you can't capture the driver's output), the brightness and gamma so far work just like the driver's given that applying order, but the contrast doesn't behave identically. My best guess is that gamma is applied over all the others, but depending on whether contrast is applied over brightness or vice versa, it's either the positive or the negative contrast values that work the original way. In other words, it's as if the brightness and contrast limited each other; the screenshots I compared made the difference clearer, with a green tick marking the result closer to the driver's.
This is the closest I've been able to get to a re-enactment. If anybody has any advice or opinion or anything, I'd deeply appreciate hearing it, since I feel a little stuck at the moment, to be honest :P
The very last question I have is whether anybody could advise me on a way to employ the GPU in the execution of the program (using CUDA or OpenCV or something along those lines, from what I've read).
I'm not particularly looking for raw speed, but rather to use the CPU less and have the GPU take on some of the heavier operations, be it the screen capture, the color adjustment functions or others. It currently seems to run at a maximum of 5% CPU usage, but I'm feeling somewhat guilty even over that.
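For context, from what I've read so far, the color adjustment step itself might map onto a CUDA kernel along these lines (an untested sketch just to illustrate what I mean by having the GPU do the heavy lifting; the capture and the host/device copies would still need a proper solution):

```cpp
#include <cuda_runtime.h>
#include <cstdint>

// Apply per-channel LUTs to a BGRA pixel buffer that already lives on
// the device, one thread per pixel.
__global__ void ApplyLutsKernel(uint8_t* pixels, size_t pixelCount,
                                const uint8_t* lutR, const uint8_t* lutG,
                                const uint8_t* lutB)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= pixelCount) return;

    uint8_t* p = pixels + i * 4;   // BGRA layout
    p[0] = lutB[p[0]];
    p[1] = lutG[p[1]];
    p[2] = lutR[p[2]];
}

// Launch with e.g. 256 threads per block:
//   size_t n = static_cast<size_t>(width) * height;
//   ApplyLutsKernel<<<(n + 255) / 256, 256>>>(devPixels, n,
//                                             devLutR, devLutG, devLutB);
```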