Issues with t-values in Intersection Program

Hey guys,
I am working with OptiX 6.5.

The issue I am seeing is that t-values computed via length( hit - ray.origin ) are incorrect when a transformation is applied.

Here I have isolated the relevant portion of code:

RT_PROGRAM void mesh_intersect( int triIdx )
{
    if ( intersect_triangle( v0, v1, v2, n, t, beta, gamma ) ) {

        hit = ray.origin + t * ray.direction;               // Line 0. initial hit

        // .... {more user code to modify hit} .... currently disabled

        t = length( hit - ray.origin );                     // Line 1. get t from modified 3D hit
        hit = ray.origin + t * ray.direction;               // Line 2. recompute hit from t

        bc = getBarycentricInTriangle( hit, v0, v1, v2 );   // Line 3. get barycentric coords
        uv = vuv0 * bc.x + vuv1 * bc.y + vuv2 * bc.z;       // Line 4. get UVs

        // ... {rtPotentialIntersection, set attributes, rtReportIntersection} ...
    }
}

The problem occurs with Line 1.
When that line is commented out, Line 2 reproduces the initial hit result and looks correct.
However, when I uncomment Line 1 (which is necessary once the user code modifies the initial hit position), the t value is incorrect.

I understand that Intersection programs run in OBJECT space, but shouldn’t the ‘hit’ and ‘ray.origin’ used to compute t in Line 1 already be in object space?

Hi @ramahoetzlein,

Yes, I believe rtCurrentRay is given in object space when in an intersection program, according to section 4.1.6 in the OptiX 6.4 Programming guide, titled “Program variable transformation”. What indication do you have that these values are not in object space? How is your recomputed t value incorrect? What if you leave Line 1 uncommented, but don’t change the hit point?

The expression length(hit - ray.origin) doesn’t directly depend on a space; it only depends on hit and ray.origin, right? So if Line 0 is correct in the first place, that seems to suggest that ray.origin and ray.direction are both correct, and that moving the hit point is doing something you didn’t expect, perhaps? Or is it possible that Line 0’s hit result is also not what you expected? How does moving the hit point work?

An easy way to validate the ray’s space would be to print it out for a specific pixel from both raygen and intersect, while using a non-identity transform. You’ll be able to see if they’re the same or different.
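Something along these lines would do it (just a sketch; the launch index, pixel coordinates, and variable names are placeholders for whatever your programs already use):

// Sketch only: dump the ray for a single pixel. rtCurrentRay is available in
// intersection/hit programs; in raygen, print the ray you construct there instead.
rtDeclareVariable( uint2,      launch_index, rtLaunchIndex, );   // assumes a 2D launch
rtDeclareVariable( optix::Ray, ray,          rtCurrentRay,  );

if ( launch_index.x == 400 && launch_index.y == 300 ) {          // pick a pixel of interest
    rtPrintf( "origin (%f %f %f)  dir (%f %f %f)  |dir| %f\n",
              ray.origin.x, ray.origin.y, ray.origin.z,
              ray.direction.x, ray.direction.y, ray.direction.z,
              length( ray.direction ) );
}

Remember that printing has to be enabled on the host first (e.g. context->setPrintEnabled( true )), otherwise rtPrintf won’t produce any output.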


David.

Hi David,

The hit in Line 0 is computed directly from the t-value coming out of triangle intersection; it is correct.

Note that the user code to modify hit is not required to demonstrate this issue. The reason I am recomputing the t-value (rather than using hit directly) is that I do want to use that code in the future.

But to demonstrate the issue, only Lines 1 & 2 are needed.

I have not tried individual pixel inspection yet, but I am able to generate a visualization of the issue. In these examples the barycentric coordinates are visualized as red, green, blue for each corner.

Here’s what I mean…

Here is the code that produces the image above:

if ( intersect_triangle( v0, v1, v2, n, t, beta, gamma ) ) {

 hit = ray.origin + t * ray.direction;    // Line 0

 if ( rtPotentialIntersection( t ) ) {
     bc = getBarycentricInTriangle  ( hit, ve0, ve1, ve2 );    // Line 3
     uv = vuv0 * bc.x + vuv1 * bc.y + vuv2 * bc.z;     // Line 4 
     colorize = make_float4( bc.x, bc.y, 1.0f - bc.x - bc.y, 1.0f );
     front_hit_point = hit;
     rtReportIntersection(0);
 }

}

In the correct image, the initial hit and t value clearly vary over the surface of the triangles. The t-value is required to report a potential intersection, and the barycentric coords are recomputed from the ‘hit’ to derive the uv-coordinates. In the above example these are correct over the triangles shown.
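For reference, getBarycentricInTriangle is just the standard barycentric solve of a point against the triangle vertices, roughly like this (sketched from memory, not the exact code):

// Rough sketch of the barycentric computation (standard dot-product solve),
// using the optixu_math helpers (dot, make_float3) that the rest of the code uses.
static __device__ float3 getBarycentricInTriangle( float3 p, float3 a, float3 b, float3 c )
{
    float3 e0 = b - a, e1 = c - a, e2 = p - a;
    float d00 = dot( e0, e0 ), d01 = dot( e0, e1 ), d11 = dot( e1, e1 );
    float d20 = dot( e2, e0 ), d21 = dot( e2, e1 );
    float denom = d00 * d11 - d01 * d01;
    float v = ( d11 * d20 - d01 * d21 ) / denom;
    float w = ( d00 * d21 - d01 * d20 ) / denom;
    return make_float3( 1.0f - v - w, v, w );   // weights for a, b, c respectively
}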

And here is the incorrect image…

And the code that generates the incorrect result:

if ( intersect_triangle( v0, v1, v2, n, t, beta, gamma ) ) {

 hit = ray.origin + t * ray.direction;    // Line 0

 t = length( hit - ray.origin);               // Line 1. retrieve t-val
 hit = ray.origin + t * ray.direction;    // Line 2. recompute hit

 if ( rtPotentialIntersection( t ) ) {
    bc = getBarycentricInTriangle  ( hit, ve0, ve1, ve2 );    // Line 3
    uv = vuv0 * bc.x + vuv1 * bc.y + vuv2 * bc.z;     // Line 4 
    colorize = make_float4( bc.x, bc.y, 1.0f - bc.x - bc.y, 1.0f );
    front_hit_point = hit;
    rtReportIntersection(0);
 }

}

The image is unmodified; the oranges and reds are the actual barycentric coords. I would expect it to be identical to the first image, since the hit is unmodified: each corner should reach the maximum r, g, or b value at its barycentric extremum.

The above is the actual code used to make those images. No user code is required to show this. The only difference in the incorrect image is that Lines 1 and 2 have been inserted to re-evaluate t.

This occurs when there is a Transform applied; in both images there is a model scale of <2,2,2>. When the identity transform is used, it doesn’t occur.

Is your ray direction normalized? With t computed as a distance in Line 1, Line 2 is only correct if length(ray.direction) == 1. Try replacing Line 2 with hit = ray.origin + t * normalize(ray.direction). Maybe your transform has a scale factor?
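As a sketch of what I mean (assuming the scale from your transform ends up in the object-space ray.direction, so it is no longer unit length):

// With a scaled transform, ray.direction in object space is not unit length,
// so length(hit - ray.origin) == t * length(ray.direction), not t itself.

t   = length( hit - ray.origin );                    // Line 1: t is now a Euclidean distance
hit = ray.origin + t * normalize( ray.direction );   // Line 2: advance along a unit direction

// Alternative sketch: divide by length(ray.direction) instead, so t stays the
// parametric value that intersect_triangle returned and Line 2 can stay unchanged:
// t = length( hit - ray.origin ) / length( ray.direction );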


David.