Normals on Ellipsoids


When rendering the particles as unit spheres everything works fine, but the calculation of normals for an ellipsoid obviously fails in my code. The OptiX API uses a right-handed coordinate system with the Y axis up, right?
The particles (using the water material designed as in the OptiX Introduction in the Advanced Samples) are actually
ellipsoids. Each of them has 3 vectors (U, V, W) and a 3D scale. W is the normal vector when the ellipsoid is splatted on a rasterizer.
Intersection checking is done by simply using a sphere on the transformed RayOrigin and transformed RayDirection.
The transform is performed using an inverse matrix containing all the U, V, W vectors and the xyz scales. That seems to work.
The normals I simply calculate from the spherical coordinate angles theta and phi, based on this answer:
That calculation seems to assume a Z-up axis, so I changed my code to a Y-up axis.
But the normals are still invalid (because the reflection + refraction of the water material should also behave correctly "upside-down" on an ellipsoid).

I’m glad for any advice, because this took hours with no result yet…

In my source file:

  OO = OO - center;
  float radius = sphere.w;
#else  // ellipsoid: apply the inverse matrix (rotation and scale) to origin OO and direction DD
  const float radius = 1.0f;
  OO = make_float3( make_float4(OO.x, OO.y, OO.z, 1.0f) * invTransMatrix );
  DD = make_float3( make_float4(DD.x, DD.y, DD.z, 0.0f) * invTransMatrix );
  float len = length(DD);
  float _1div_len = 1.0f / len;
  DD = make_float3(DD.x * _1div_len, DD.y * _1div_len, DD.z * _1div_len); // unit length

  // ... calculate intersection as on a unit sphere (as in the optixMDLSphere SDK sample)

  // on potential intersection:
  float3 p = (OO + (root1 + root11) * D);
  shading_normal = geometric_normal = p / radius;
  const float phi   = atan2f(p.z, p.x);               // azimuth angle, -180° .. +180°
  const float theta = acosf(clamp(p.y, -1.0f, 1.0f)); // polar angle, 0° .. 180°

  // pre-calculate sin and cos for phi and theta
  float sin_phi, cos_phi, sin_theta, cos_theta;
  sincosf(phi, &sin_phi, &cos_phi);
  sincosf(theta, &sin_theta, &cos_theta);

/*// Z-up coords, obviously (here the transform matrix also contained scaling)
  float nx = cos_theta * cos_phi;
  float ny = cos_theta * sin_phi;
  float nz = sin_theta;
*/
  // Y up (the TransMatrix only contains the rotation; no scaling)
  float nx = cos_theta * cos_phi / scales.x;
  float nz = cos_theta * sin_phi / scales.z;
  float ny = sin_theta / scales.y;

  float3 norm = make_float3(nx, ny, nz);

  // rotation-transform this vector back to final coords
  norm = make_float3( make_float4(norm.x, norm.y, norm.z, 1.0f) * TransMatrix ); // point transform (w=1), direction transform (w=0)
  norm = normalize(norm);

  shading_normal = geometric_normal = norm;

What space is your point p in? It’s being computed using a point D, which isn’t shown here.

At a glance, I would guess that your point p isn’t correct. In order to compute the azimuthal and polar angles, p needs to be a local-space point, oriented so that p’s x, y, z coordinates are aligned with the ellipsoid’s major and 2 minor axes. Also p needs to first start on the surface of the ellipsoid, and then get normalized in order to pass it directly to atan2f() and acosf(). I don’t see that happening, so I’d guess that’s the root of the issue.

A couple of general comments:

It looks like you might have a matrix that transforms the sphere into an ellipsoid? If you do, you can use it directly to compute normals by using the inverse transpose matrix.

If using the inverse transpose matrix isn’t an option for you, you can also likely avoid all the trig functions like atan2f() and acosf() by using the identities cos(theta)=dot(a,b) and sin(theta)=length(cross(a,b)), where a and b are normal vectors and theta is the angle between them.

It will be much easier to debug this if you can visualize the normals directly. Using a render of refraction makes it tougher to know if it’s right. I’d recommend finding a code environment where you can draw the normals as lines or cylinders. There are lots of ways to do that, but if you want OptiX to be that environment, you might create some procedural cylinders that stick out of the ellipsoid in a grid of UV locations, or something like that.

@dhart, Thank you very much for your answer.

“p” is the normal on the unit sphere (and at the same time it’s a point on the surface).

“D” is assigned from “DD” (in the original optixMDLSphere sample of SDK 5.1.1 it’s “ray.direction”, but that is already transformed to “DD”).
Unfortunately I forgot to rename “D” to “DD” in line 15 in the post.

So now I simply transform that unit-sphere normal into the ellipsoid normal using the transposed inverse matrix.
And I changed the order in which the matrix is created. Now: matrix = scale_matrix * rotation_matrix;

Calculating the “t” is required, and so I need the 3 matrices.

intersection program code:
host code:

I did some quick and dirty color-based normals visualization (see NormalsVisualization.jpg in the attachment)

In all rendered cases I use the exact same water material. For visualizing the normals
in the program I simply did this to convert the normal XYZ from the -1.0 … +1.0 range to the RGB 0.0 … 1.0 range:

float3 norm = state.geom_normal;
float3 normals_color = make_float3(norm.x * 0.5f + 0.5f, norm.y * 0.5f + 0.5f, norm.z * 0.5f + 0.5f);
thePrd.radiance = normals_color;

(so red channel = +/-X of the normal, green channel = +/-Z, blue channel = +/-Y of the normal)

When using the normals computed through the transposed inverse matrix, the normals now look much more similar to those on the reference unit sphere,
but the result is better when their x and z components are inverted; and it is still invalid when the camera looks at them from above.


I finally solved it. The normals are OK and visually correct after updating the intersection handling (see Ellipsoids.jpg).
Thanks again @dhart, your answer really helped me a lot.