Projective Texture Coordinates and GPU_SetScanoutWarping

I’m trying to create a perspective warp using NvAPI_GPU_SetScanoutWarping. Four vertices, one for each corner, are specified in the warping data structure. Each vertex contains a 2D coordinate defined in viewport space, as well as a 4D texture coordinate defined in desktop space. My primitive type is a triangle strip.

For each corner vertex, I calculate the proper w-coordinate for the texture, to facilitate perspective warping. I assume the texture coordinate is internally interpolated using “rgb = textureProj( tex, coordinate.xyzw )”, which is equivalent to “rgb = texture( tex, coordinate.xy / coordinate.w )”.
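
In other words, I assume the hardware interpolates the 4D coordinate per pixel and divides by the w-component before sampling the desktop. A minimal C++ sketch of that assumption (Color and sampleDesktop are hypothetical, purely for illustration):

struct Color { float r, g, b; };

// Hypothetical desktop lookup, purely for illustration.
Color sampleDesktop(float x, float y);

// Assumed scanout behaviour: projective texturing. The interpolated
// coordinate (u, v, r, w) is divided by w before the desktop is sampled,
// which matches rgb = textureProj( tex, coordinate.xyzw ).
Color projectiveSample(float u, float v, float /*r*/, float w)
{
    return sampleDesktop(u / w, v / w);
}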

The warp is correct for the display whose desktop coordinates start at (0,0). However, results are wrong for subsequent displays, e.g. one with a desktop area from top-left (1920,0) to bottom-right (3840,1080).

Can you confirm that:
a) texture coordinates for all warps on all displays should be specified in desktop coordinates, where these coordinates are obtained using NvAPI_GPU_GetScanoutConfiguration;
b) texture coordinates, when used in the call NvAPI_GPU_SetScanoutWarping, are internally interpolated using projective texturing?

If not, tell me what I should do to fix my perspective warps.

-Paul

For anyone’s information:

It turns out the coordinates are different for Mosaic setups and non-Mosaic setups.

Two full-HD displays positioned horizontally next to each other in a non-Mosaic setup will yield:
Desktop 1: [0,0]-[1920,1080]
Source 1: [0,0]-[1920,1080]
Desktop 2: [1920,0]-[3840,1080]
Source 2: [1920,0]-[3840,1080]

Texture coordinate top left:
x = desktop.left;
y = desktop.top;
z = 0;
w = perspective factor;
Texture coordinate bottom right:
x = desktop.left + desktop.width * perspective factor;
y = desktop.top + desktop.height * perspective factor;
z = 0;
w = perspective factor;

Two full-HD displays positioned horizontally next to each other in a Mosaic setup will yield:
Desktop 1: [0,0]-[1920,1080]
Source 1: [0,0]-[3840,1080]
Desktop 2: [1920,0]-[3840,1080]
Source 2: [0,0]-[3840,1080]

Texture coordinate top left:
x = desktop.left * perspective factor;
y = desktop.top * perspective factor;
z = 0;
w = perspective factor;
Texture coordinate bottom right:
x = desktop.right * perspective factor;
y = desktop.bottom * perspective factor;
z = 0;
w = perspective factor;

Additional testing on different setups showed that the above is not entirely correct. Here’s how NvAPI actually applies the texture coordinates UVRQ:

  • For affine transformations, e.g. bilinear warping, the R-coordinate is zero and the Q-coordinate is one. The UV-coordinates are expressed in desktop coordinates (see the sketch after this list).
  • For perspective transformations, the R-coordinate is still zero, but the Q-coordinate depends on the perspective. I won’t go into details on how to calculate it, as this is part of proprietary software. However, once you have found the U, V and Q coordinates (the UV-coordinates are still expressed in desktop coordinates), you should:
    1. Subtract the origin of the source rectangle from UV.
    2. Multiply the UV coordinates with Q.
    3. Add the origin of the source rectangle back to UV.
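
As a concrete baseline for the affine case, here is a sketch of four XYUVRQ corner vertices for an identity (no-op) warp of Display 2 from the non-Mosaic example above. I'm assuming viewport coordinates run from (0,0) to (width,height) of the display; the corner order is just one possible triangle-strip order:

// Identity warp of the non-Mosaic Display 2 (desktop [1920,0]-[3840,1080]).
// Affine case: R = 0, Q = 1, U/V are plain desktop coordinates,
// X/Y are viewport coordinates (assumed to run 0..width, 0..height).
float vertices[4][6] = {
    //     X        Y        U        V      R     Q
    {    0.0f,    0.0f,  1920.0f,    0.0f,  0.0f, 1.0f },  // top left
    { 1920.0f,    0.0f,  3840.0f,    0.0f,  0.0f, 1.0f },  // top right
    {    0.0f, 1080.0f,  1920.0f, 1080.0f,  0.0f, 1.0f },  // bottom left
    { 1920.0f, 1080.0f,  3840.0f, 1080.0f,  0.0f, 1.0f },  // bottom right
};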

So, to summarize, I believe the driver first subtracts the source origin from UV, then applies the perspective scaling by Q, and finally adds the source origin back.

Each vertex therefore requires 6 floats:
X = expressed in viewport coordinates
Y = expressed in viewport coordinates
U = expressed in desktop coordinates
V = expressed in desktop coordinates
R = 0
Q = perspective factor

Prior to sending these vertices to the NvAPI_GPU_SetScanoutWarping() call, make sure to calculate:

U = ( U - source.mX ) * Q + source.mX
V = ( V - source.mY ) * Q + source.mY
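
Putting the pieces together, a minimal C++ sketch of that last step: it builds one XYUVRQ vertex and applies the source-origin correction before the vertex data is handed to the NvAPI_GPU_SetScanoutWarping() call. How you obtain Q is up to you (see above), and I'm not showing the warping data structure itself; check the NvAPI headers for its exact fields and versioning.

struct WarpVertex            // 6 floats per vertex, as listed above
{
    float x, y;              // viewport coordinates
    float u, v;              // desktop coordinates
    float r;                 // always 0
    float q;                 // perspective factor
};

// Apply the convention described above: subtract the source origin,
// scale by Q, then add the source origin back. sourceX/sourceY are the
// origin of the source rectangle for this display, e.g. as reported by
// NvAPI_GPU_GetScanoutConfiguration().
WarpVertex makeWarpVertex(float x, float y,     // viewport position
                          float u, float v,     // desktop position
                          float q,              // perspective factor
                          float sourceX, float sourceY)
{
    WarpVertex vert;
    vert.x = x;
    vert.y = y;
    vert.u = (u - sourceX) * q + sourceX;
    vert.v = (v - sourceY) * q + sourceY;
    vert.r = 0.0f;
    vert.q = q;
    return vert;
}

// The resulting vertices (4 of them for a triangle-strip quad) are laid
// out as a flat float array inside the warping data structure that is
// passed to NvAPI_GPU_SetScanoutWarping().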

