364.12 - GTX 660 - Can't access videolut to load display profile

dispwin -v -d1 profile.icc

outputs this error:

About to open dispwin object on the display
Argyll 'V1.8.3' Build 'Linux 64 bit' System 'Linux #1 ZEN SMP PREEMPT Tue Mar 15 18:33:17 UTC 2016 4.5.0-1-zen x86_64'
Dispwin: Error - We don't have access to the VideoLUT for loading

It works with previous versions of the driver.
nvidia-bug-report.log.gz (267 KB)

That’s strange. This driver didn’t change anything about what is available to applications, but it did change the size of the RandR gamma ramp. I’ll see if I can look into why this is confusing dispwin.

Thank you.

I found the problem in the argyllcms source code, in spectro/dispwin.c (around lines 1139 and 1283):

/* For VideoLUT/RAMDAC use, we assume that the number of entries in the RAMDAC */
/* meshes perfectly with the display raster depth, so that we can */
/* figure out how to apportion device values. We fail if they don't */
/* seem to mesh. */

and

	if (nent != (1 << p->pdepth)) {
		debugr2((errout,"XRRGetCrtcGammaSize number of entries %d mismatches screen depth %d bits\n",nent,(1 << p->pdepth)));
		return NULL;
	}

The assumption that the gamma ramp size matches 1 << depth is no longer true now that the 1,024-entry gamma ramp hardware is exposed directly to applications. The argyllcms folks are going to have to update dispwin to take the difference into account.
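
If it helps to reproduce this outside of dispwin, here is a minimal standalone sketch (my own, not ArgyllCMS code; it assumes libX11 and libXrandr are available, and the file/program names are just placeholders) that prints the two numbers the quoted check compares, using the same bits_per_rgb value dispwin stores in pdepth:

/* Minimal sketch (not part of ArgyllCMS; assumes libX11 + libXrandr):
 * print the RandR gamma ramp size for the first CRTC next to
 * 1 << bits_per_rgb, the value dispwin currently expects it to equal.
 * Build with something like: cc ramp_vs_depth.c -o ramp_vs_depth -lX11 -lXrandr */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    Visual *vis = DefaultVisual(dpy, DefaultScreen(dpy));
    XRRScreenResources *res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
    if (!res || res->ncrtc < 1) return 1;

    int nent = XRRGetCrtcGammaSize(dpy, res->crtcs[0]);

    /* dispwin sets p->pdepth from bits_per_rgb and then requires
     * nent == (1 << p->pdepth); with a 1024-entry ramp and an
     * 8-bit-per-channel visual that comparison now fails. */
    printf("gamma ramp entries: %d, 1 << bits_per_rgb: %d\n",
           nent, 1 << vis->bits_per_rgb);

    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}

With this driver and an ordinary depth-24 visual it should print 1024 and 256, which is exactly the mismatch described above.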

I’ve contacted argyllcms’ mailing list.

Thanks for your time investigating the bug.

If the depths don’t match, then it’s unclear how the extra address bits into the LUT are wired. Are the extra bits at the MS end, the LS end, or what? Are they wired to 0, 1, or something else, such as some of the other address bits? Where in the XRandR API is this defined, so that this can be handled automatically by the software, for hardware that makes different choices about the above?

[ i.e. the API would be much cleaner if the hardware didn’t expose this mismatch. ]

Without that being defined, it’s not possible to know what to put in the LUT, hence the assert in the ArgyllCMS code.

There are several factors at play here.

  1. The pixel values are decomposed into fields based on the red, green, and blue masks in the X11 Visual.
  2. The value from each channel of the pixel is used as an index into the X11 Colormap. E.g., for depth 24 with 8 bits per channel, there are 256 entries in the colormap.
  3. Each entry in the colormap has 11 significant bits, which map to a conceptual [0, 1] range.
  4. The color from the colormap goes through Digital Vibrance and the new colorspace conversion matrix, and is blended with the cursor. This blended result still has an 11-bit range.
  5. The final blended result is used as a linearly-interpolated input to the gamma ramp.

So the last step is what’s important for argyllcms, presumably.
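
To make steps 1-3 concrete, here is a rough numeric illustration for an ordinary depth-24 TrueColor visual with a linear (identity) colormap. This is only a model of the description above, not driver code, and it deliberately stops before the DV/CSC/cursor-blend and gamma ramp interpolation steps, which are discussed further below:

/* Rough numeric model of steps 1-3 above for a depth-24 TrueColor visual
 * (8 bits per channel, 256 colormap entries, 11 significant bits per entry).
 * Illustration only. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t pixel = 0x80FF40;             /* step 1: a framebuffer pixel value    */
    unsigned r_idx = (pixel >> 16) & 0xFF; /* decomposed via the visual's masks    */
    unsigned g_idx = (pixel >>  8) & 0xFF;
    unsigned b_idx =  pixel        & 0xFF;

    /* steps 2/3: each index selects a colormap entry; with a linear colormap the
     * entry holds the index scaled into the 11-bit [0, 2047] range */
    unsigned r11 = r_idx * 2047 / 255;
    double   r01 = r11 / 2047.0;           /* conceptual [0, 1] value              */

    printf("red index %u -> 11-bit entry %u -> %.4f\n", r_idx, r11, r01);

    /* steps 4/5 (DV, CSC matrix, cursor blend, gamma ramp interpolation) are
     * not modeled here */
    (void)g_idx; (void)b_idx;
    return 0;
}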

I’m assuming that the specific card doesn’t matter, but the driver version does.

With

    hwinfo --gfxcard | grep Model
        Model: "nVidia GK208 [GeForce GT 720]"
    nvidia-settings -v
        nvidia-settings: version 364.12 (buildmeister@swio-display-x64-rhel04-01) Wed Mar 16

FWIW, a similar error was reported at

https://hub.displaycal.net/issue/new-errors-on-linux64-calibration-curves-could-not-be-loaded-dispwin-error-we-dont-have-access-to-the-videolut

I’m getting a similar issue, also with ArgyllCMS, but this time the problem does not seem to be ArgyllCMS:

visual:
    visual id:    0x21
    class:    TrueColor
    depth:    30 planes
    available colormap entries:    1024 per subfield
    red, green, blue masks:    0x3ff, 0xffc00, 0x3ff00000
    significant bits in color specification:    11 bits

Basically, X.org thinks I have an 11-bit display, even though I’ve specified Depth 30 and DefaultDepth 30.

This seems to be a regression; it worked in the past (in the sense that xdpyinfo printed 10 when using a 30-bit X.org display).

This breaks exactly the same check in argyllcms, because it compares 1024 (the hardware LUT size) against 2048 (1 << 11, derived from the depth it thinks the display has).

Edit: This temporary work-around fixes the issue, and now I can load calibration curves successfully:

diff -u -r Argyll_V1.8.3.old/spectro/dispwin.c Argyll_V1.8.3/spectro/dispwin.c
--- Argyll_V1.8.3.old/spectro/dispwin.c	2016-03-27 20:37:01.408827322 +0200
+++ Argyll_V1.8.3/spectro/dispwin.c	2016-03-27 20:37:21.476663326 +0200
@@ -4822,7 +4822,7 @@
 		// Hmm. Should we explicitly get the root window visual,
 		// since our test window inherits it from root ?
 		myvisual = DefaultVisual(p->mydisplay, p->myscreen);
-		p->pdepth = myvisual->bits_per_rgb;
+		p->pdepth = 10;
 		p->edepth = 16;
 
 		if (nowin == 0) {			/* Create a window */

haasn, that 11 bits refers to the number of significant bits in each colormap entry. So since your visuals are 10 bits per component, the colormap has 1024 entries, with each entry having an effective range of [0, 2047] (although X still represents them as 16-bit numbers).

The number of colormap entries is always exactly (1 << bits_per_component), but that’s independent of the precision of the colormap entries and, starting with RandR 1.2, of the number of entries in the gamma ramp.
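
A small sketch (assumes libX11 + libXrandr; not taken from any of the tools in this thread) that prints the three numbers being conflated here: the colormap entry count derived from the channel mask width, the per-entry precision reported in bits_per_rgb, and the RandR 1.2 gamma ramp size:

/* Sketch: show that colormap entry count, colormap entry precision, and
 * gamma ramp size are three independent numbers.
 * Build with something like: cc visinfo.c -o visinfo -lX11 -lXrandr */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

static int mask_bits(unsigned long m) {
    int n = 0;
    while (m) { n += m & 1; m >>= 1; }
    return n;
}

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int scr = DefaultScreen(dpy);
    Visual *vis = DefaultVisual(dpy, scr);
    int per_component = mask_bits(vis->red_mask);

    XRRScreenResources *res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
    int ramp = (res && res->ncrtc) ? XRRGetCrtcGammaSize(dpy, res->crtcs[0]) : 0;

    printf("colormap entries: %d (from %d-bit masks)\n", 1 << per_component, per_component);
    printf("bits per colormap entry (bits_per_rgb): %d\n", vis->bits_per_rgb);
    printf("gamma ramp entries: %d\n", ramp);

    if (res) XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}

On the depth-30 setup reported above this should print 1024 colormap entries from 10-bit masks, 11 bits per entry, and a 1024-entry gamma ramp.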

@aplattner What are the extra bits used for? To compensate for precision loss due to rounding errors when going through Digital Vibrance, limited/full range conversion etc.?

Hi, thanks for the hint about the X11 Visual and X11 Colormap. I’ve implemented a fix to take these into account, but a problem persists. My code determines that the X11 DirectColor framebuffer is 8 bits, and that the X11 Colormap entry depth is 11 bits. But XRRGetCrtcGammaSize returns 1024 entries (10 bits), which doesn’t match the X11 Colormap. How do I determine exactly which entry in the gamma ramp is selected?
This is important, because a single gamma ramp entry value is used to set high-precision color calibration test values, so exactly that entry needs to be selected by the frame buffer value.

If you are linearly interpolating from 11 bits to 10, then I need to know exactly what 10-bit number will result from a particular 11-bit input. (i.e. what precision and rounding is used for the 11-bit to 10-bit interpolation? Is it exactly equivalent to (in * 1023 + 1023) / 2047 for integer values 0…2047?)

Another question: is there any way of detecting whether “Digital Vibrance” is turned on, so that I can warn the user that they are not going to get a usable color calibration/profile out of the system, or is “Digital Vibrance” implemented with the CscMatrix?

Thanks.

My understanding (though I’ll have to double-check it) is that if all other pixel processing is disabled, replicating the most-significant bit of the colormap entry to the least-significant should map exactly to a gamma ramp entry. I’m on paternity leave for two weeks, so I’ll have to double-check after I get back.
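
For what it’s worth, here is a small self-check of that rule as I understand it (my own sketch, and the rule itself is still unverified as noted above): build the 11-bit colormap value from a 10-bit ramp index by copying the index’s most-significant bit into the extra least-significant bit, then confirm it round-trips through the (in * 1023 + 1023) / 2047 rounding asked about earlier:

/* Self-check of the (unverified) MSB-replication rule described above. */
#include <stdio.h>

int main(void) {
    for (unsigned i10 = 0; i10 < 1024; i10++) {
        unsigned v11  = (i10 << 1) | (i10 >> 9);       /* MSB replicated into the LSB   */
        unsigned back = (v11 * 1023u + 1023u) / 2047u; /* candidate 11->10 bit rounding */
        if (back != i10) {
            printf("mismatch: index %u -> 11-bit %u -> back %u\n", i10, v11, back);
            return 1;
        }
    }
    printf("all 1024 indices round-trip under these assumptions\n");
    return 0;
}

If the hardware really does index the ramp this way, that bit-replicated value is the exact 11-bit input that selects each of the 1024 entries; but that still needs to be confirmed.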

Other pixel processing includes “digital vibrance”, the CscMatrix, and dithering. I believe dithering is applied before the gamma ramp, but I’ll have to double-check that as well. Digital vibrance is a separate setting from the CscMatrix. I don’t know whether the matrix is applied before or after DV.

You can query and control the digital vibrance and dithering settings via NV-CONTROL attributes documented in NVCtrl.h.
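
As a starting point, something along these lines should work for querying those two settings. This is only a sketch: it assumes the libXNVCtrl headers and library that ship with nvidia-settings (include paths vary by distribution), and for simplicity it queries per X screen, whereas dithering is really a per-display attribute, so a real tool would probably enumerate display targets with XNVCTRLQueryTargetAttribute instead.

/* Sketch: query current digital vibrance and dithering via NV-CONTROL.
 * Build with something like: cc nvquery.c -o nvquery -lXNVCtrl -lX11
 * (header locations and library names vary by distribution) */
#include <stdio.h>
#include <X11/Xlib.h>
#include <NVCtrl/NVCtrl.h>
#include <NVCtrl/NVCtrlLib.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int screen = DefaultScreen(dpy);
    int dv = 0, dither = 0;

    /* NV_CTRL_DIGITAL_VIBRANCE: 0 is the neutral/default setting */
    if (XNVCTRLQueryAttribute(dpy, screen, 0, NV_CTRL_DIGITAL_VIBRANCE, &dv))
        printf("digital vibrance: %d\n", dv);

    /* NV_CTRL_DITHERING: raw value printed here; see NVCtrl.h for its meaning */
    if (XNVCTRLQueryAttribute(dpy, screen, 0, NV_CTRL_DITHERING, &dither))
        printf("dithering setting: %d\n", dither);

    XCloseDisplay(dpy);
    return 0;
}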

Dithering being enabled can confuse ArgyllCMS, incidentally. I have my X display set to depth 30, and one of my displays (connected via DisplayPort) set to 10-bit mode with dithering disabled. ArgyllCMS can detect and calibrate this display at 10-bit depth reliably. (Testing via dispcal -R; the test completes very quickly.)

The other display is connected over HDMI (sadly my GPU only has one DisplayPort connector, even though the display is 10-bit capable…) and is therefore limited to 8 bits, so I have temporal dithering enabled in the GPU.

ArgyllCMS’ detection of this display’s effective depth is unreliable: the test frequently needs to be repeated multiple times, and the depth it actually reports varies between runs.

It might be better if ArgyllCMS either disables dithering completely, or just disables temporal dithering, or some combination thereof. (if possible)

Right, I’m not that surprised, since detecting display depth pushes the limits of display stability and instrument precision and repeatability. Adding dithering noise to the mix will almost certainly mess this up, since, for speed, the test doesn’t have any redundancy in its decision tree.

But (at the moment) this function is purely informative; it has nothing to do with calibration or profiling.

I’m not sure that’s a good idea. The display should be calibrated & profiled in the state in which it is going to be used.

What I’m more concerned about is whether the high precision VideoLUT values are being used in the test patches or not. Dithering and a lack of clarity on how frame buffer values map to VideoLUT indexes make this less certain than I’d like. I have an idea that may work around this, but it adds yet more complexity.

With the release of NVIDIA 364.19, what’s the status of this issue? IIUC, the previous beta was going to require a fix to Argyll.

That seems not to have happened – or did it?

Or has the issue been addressed in the NV release?

Wondering the same; it’s not clear from the discussion whether it will be fixed on the nvidia side or the Argyll side. I’m stuck on the 352 series driver until this is fixed.

And, FWIW, now there are issues with kernel 4.6 vs. the nvidia driver. Unfortunately, I’m getting a bit boxed in…

Any update on this?

I’m on ArchLinux

kernel 4.5.4-1-ARCH
nvidia-settings:  version 364.19  (builduser@felix)  Sat Apr 23 14:31:57 UTC 2016
displaycal 3.1.2.0 2016-03-03T23:20:20.448Z   x86_64
Argyll CMS 1.8.3

And when running displaycal/dispcalgui I get the following errors:

Dispwin: Error - We don't have access to the VideoLUT for loading
Dispwin: Error - We don't have access to the VideoLUT for clearing

You can try compiling the latest Argyll dev sources.