Projective Texture Coordinates and GPU_SetScanoutWarping

Additional testing on different setups showed that the above is not entirely correct. Here’s how NvAPI actually applies the texture coordinates UVRQ:

  • For affine transformations, e.g. bilinear warping, the R-coordinate is zero and the Q-coordinate is one. The UV-coordinates are expressed in desktop coordinates.
  • For perspective transformations, the R-coordinate is still zero, but the Q-coordinate depends on the perspective. I won’t go into details on how to calculate it, as this is part of proprietary software. However, once you have found the U, V and Q coordinates (the UV-coordinates are still expressed in desktop coordinates), you should do the following (see the sketch after this list):
    1. Subtract the origin of the source rectangle from UV.
    2. Multiply the UV coordinates with Q.
    3. Add the origin of the source rectangle back to UV.

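Below is a minimal sketch of those three steps in C++. The WarpVertex and SourceRect types and the PreProjectUV name are my own placeholders; only the order of operations reflects the driver behaviour described above.

// Hypothetical types; a vertex holds the six floats listed further below.
struct WarpVertex { float x, y, u, v, r, q; };
struct SourceRect { float mX, mY, mWidth, mHeight; };

void PreProjectUV( WarpVertex & vtx, const SourceRect & source )
{
    vtx.u -= source.mX;   // 1. subtract the source origin from UV
    vtx.v -= source.mY;
    vtx.u *= vtx.q;       // 2. multiply the UV coordinates with Q
    vtx.v *= vtx.q;
    vtx.u += source.mX;   // 3. add the source origin back to UV
    vtx.v += source.mY;
}
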
So, to summarize, I believe the driver first subtracts the source origin from UV, performs the perspective projection, and then adds the source origin back afterwards.

Each vertex therefore requires 6 floats:
X = expressed in viewport coordinates
Y = expressed in viewport coordinates
U = expressed in desktop coordinates
V = expressed in desktop coordinates
R = 0
Q = perspective factor

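For illustration, an identity warp of a 1920×1080 desktop source could be packed as a 4-vertex triangle strip like this (the values are examples only; the XYUVRQ ordering per vertex follows the list above):

// Desktop source rectangle and viewport size; example values only.
const float srcX = 0.0f,    srcY = 0.0f;
const float srcW = 1920.0f, srcH = 1080.0f;
const float vpW  = 1920.0f, vpH  = 1080.0f;

// Four triangle-strip vertices, 6 floats each, in XYUVRQ order.
// Affine case: R = 0, Q = 1, so the pre-multiplication below is a no-op.
const float vertices[ 4 * 6 ] = {
    //  X     Y     U            V            R     Q
      0.0f, 0.0f, srcX,        srcY,        0.0f, 1.0f,
      vpW,  0.0f, srcX + srcW, srcY,        0.0f, 1.0f,
      0.0f, vpH,  srcX,        srcY + srcH, 0.0f, 1.0f,
      vpW,  vpH,  srcX + srcW, srcY + srcH, 0.0f, 1.0f,
};
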
Prior to passing the vertices to the NvAPI_GPU_SetScanoutWarping() call, make sure to calculate:

U = ( U - source.mX ) * Q + source.mX
V = ( V - source.mY ) * Q + source.mY

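Putting it together, here is a hedged sketch of the full call. The helper name ApplyWarp is mine; the NV_SCANOUT_WARPING_DATA fields and the NvAPI_GPU_SetScanoutWarping() signature follow the public nvapi.h as I understand it, so double-check them against your SDK headers.

#include "nvapi.h"

// Applies the pre-multiplication above to every XYUVRQ vertex in-place,
// then hands the array to the driver. verts holds 6 floats per vertex.
NvAPI_Status ApplyWarp( NvU32 displayId, float * verts, int numVertices,
                        NvSBox * sourceRect )
{
    const float srcX = (float) sourceRect->sX;
    const float srcY = (float) sourceRect->sY;

    for ( int i = 0; i < numVertices; ++i )
    {
        float * v = verts + i * 6;
        const float q = v[ 5 ];
        v[ 2 ] = ( v[ 2 ] - srcX ) * q + srcX;   // U = ( U - source.mX ) * Q + source.mX
        v[ 3 ] = ( v[ 3 ] - srcY ) * q + srcY;   // V = ( V - source.mY ) * Q + source.mY
    }

    NV_SCANOUT_WARPING_DATA warp = {};
    warp.version      = NV_SCANOUT_WARPING_VER;
    warp.vertices     = verts;
    warp.vertexFormat = NV_GPU_WARPING_VERTICE_FORMAT_TRIANGLESTRIP_XYUVRQ;
    warp.numVertices  = numVertices;
    warp.textureRect  = sourceRect;

    int maxNumVertices = 0;
    int sticky         = 0;
    return NvAPI_GPU_SetScanoutWarping( displayId, &warp, &maxNumVertices, &sticky );
}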