I tried to use an OptiX transform to scale a triangle mesh, but the result isn’t correct if I chain it with other transforms like rotations.
Initially, I scaled my meshes by multiplying their vertex coordinates by a float value (Figure A). Using a transform matrix gave a similar result at first (Figure B). However, if I rotate the mesh after scaling it with a transform, it appears inverted along the Y axis (Figure C). Rotating it after scaling the mesh by multiplying its vertex coordinates gives the correct result (Figure D), though.
I’m using optix::Matrix4x4 matrices to set my transforms on the host side, and I use the following rtTransform functions in my intersection programs:
```cpp
// ...
float3 hit_point = ray.origin + t * ray.direction;
hit_point = rtTransformPoint(RT_OBJECT_TO_WORLD, hit_point);
hit_rec.p = hit_point;
// ...
hit_rec.normal = optix::normalize(rtTransformNormal(RT_OBJECT_TO_WORLD, normal));
// ...
```
I know we need to correct the normals when scaling with matrix operations, but I believe rtTransformNormal does that. Is that right? Am I missing something?