How to transform a vector to be in triangle orientation?

Ok, I can calculate the normal of a triangle, and I have another vector on which I want to perform an operation with respect to the triangle normal. This other vector is in its own reference space; what should I use to transform it?

Thanks.

Hi @user131777,

Have a look at the optixTransform{Point,Vector,Normal}* functions in optix_7_device.h. You can also get fancier with your traversal transform stack if you need, as outlined in the “Transform List” section of the OptiX Programming Guide: https://raytracing-docs.nvidia.com/optix7/guide/index.html#device_side_functions#transform-list
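For the most common case, a closest-hit program that needs the triangle’s normal in world space, a minimal sketch could look like this (not SDK sample code; getTriangleObjectNormal() is a hypothetical helper that reads the hit triangle’s vertices from your own hit-group data, and normalize() comes from a float3 math header):

#include <optix.h>

// Hedged sketch: bring the object-space geometric normal of the hit triangle
// into world space inside a closest-hit program.
extern "C" __global__ void __closesthit__example()
{
    // Hypothetical helper: reads the three vertices of the hit primitive from
    // your own hit-group SBT data and returns the object-space face normal.
    const float3 nObject = getTriangleObjectNormal( optixGetPrimitiveIndex() );

    // Normals need the inverse-transpose transform; the *Normal* variant handles that.
    // Use optixTransformVectorFromObjectToWorldSpace() for directions and
    // optixTransformPointFromObjectToWorldSpace() for positions.
    const float3 nWorld = normalize( optixTransformNormalFromObjectToWorldSpace( nObject ) );

    // ... use nWorld for shading or pass it back through the payload.
}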


David.

I only see things like optixTransform from World/Object space, and not general transforms. Am I right to say that I am not transforming from either object or world space?

I wouldn’t know what space your other vector is in. “Object space” is a term that commonly represents a space that is local to a given object/mesh/primitive, etc., so maybe your vector is in the object space of a different object than the triangle you’re working on? OptiX’s use of “object space” refers to whatever coordinate system you used to define the primitives when building a GAS. “World space” in OptiX is the space of your root scene node, usually the IAS that you trace against in your raygen program.

The general transforms are available through the transform list part of the API, covered in the Programming Guide link I posted. These functions can be used to query the matrix transform at any level of the hierarchy, and to construct a composite transform that will take you from any space you’ve told OptiX about to any other space.

If you put an instance transform on top of the GAS as it’s connected to the IAS/scene, then that transform defines the conversion from the world space of the scene to the object space of the GAS. If you have a multi-level scene with nested instance transforms, then the conversion requires iterating through a list of transforms and collecting the combined transform, and then applying that to a point/normal/vector. This is what the convenience functions optixTransform* do for you, and they are best used when you only have one or two things to transform. If you have a lot of data to transform, or if your code to do transformations is branchy, then it may be more efficient to query the transforms and apply them yourself.
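As a rough sketch of that manual route (not SDK code: it assumes the transform list is ordered from the root down to the GAS, only handles plain instance transforms, and multiply3x4() is a local helper, not an OptiX function):

#include <optix.h>

// C = A * B for row-major 3x4 affine matrices stored as three float4 rows
// (implicit last row 0 0 0 1).
static __forceinline__ __device__ void multiply3x4( const float4* A, const float4* B, float4* C )
{
    for( int r = 0; r < 3; ++r )
    {
        const float4 a = A[r];
        C[r] = make_float4( a.x * B[0].x + a.y * B[1].x + a.z * B[2].x,
                            a.x * B[0].y + a.y * B[1].y + a.z * B[2].y,
                            a.x * B[0].z + a.y * B[1].z + a.z * B[2].z,
                            a.x * B[0].w + a.y * B[1].w + a.z * B[2].w + a.w );
    }
}

// Accumulate the composite object-to-world matrix of the current transform list.
// Static and motion transforms would need their own branches; check the
// Programming Guide (or optix_7_device_impl_transformations.h in the SDK)
// for the exact list ordering and the motion transform handling.
static __forceinline__ __device__ void getObjectToWorld( float4 m[3] )
{
    m[0] = make_float4( 1.0f, 0.0f, 0.0f, 0.0f );  // start with the identity
    m[1] = make_float4( 0.0f, 1.0f, 0.0f, 0.0f );
    m[2] = make_float4( 0.0f, 0.0f, 1.0f, 0.0f );

    const unsigned int size = optixGetTransformListSize();
    for( unsigned int i = 0; i < size; ++i )
    {
        const OptixTraversableHandle handle = optixGetTransformListHandle( i );
        if( optixGetTransformTypeFromHandle( handle ) == OPTIX_TRANSFORM_TYPE_INSTANCE )
        {
            const float4* inst = optixGetInstanceTransformFromHandle( handle );
            float4 tmp[3];
            multiply3x4( m, inst, tmp );
            m[0] = tmp[0]; m[1] = tmp[1]; m[2] = tmp[2];
        }
    }
}

The same loop with optixGetInstanceInverseTransformFromHandle (multiplied in the opposite order) gives you the world-to-object direction, which is what you would combine with your vector’s local->world matrix.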

So, given your description, I would guess that your goal is either to convert both your vector and your triangle into world space and do the computation there, or to figure out the local->world transform for your vector, use OptiX to query the world->object transform for your triangle, multiply the two transforms together, and apply the combined transform to the vector so that it ends up in your triangle’s object space.


David.

Ok, I can calculate the normal of a triangle, and I have another vector on which I want to perform an operation with respect to the triangle normal. This other vector is in its own reference space; what should I use to transform it?

If you’re using two different coordinate spaces and want to convert vectors from one to the other, you need a basis transformation.
For that you usually use an ortho-normal basis of the coordinate space you want to convert into or out of.
Then the transformations are simply three dot products to convert a vector into the ortho-normal basis’ space, and a matrix multiplication on the vector to convert it back to the outer space.
This is usually done when the shading calculations (== the BSDF implementation) happen in object (or texture) space while the ray directions are in world space outside the shading code.
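As a rough sketch of that idea (not the exact code from my examples; dot, cross, normalize and the float3 operators are assumed to come from a small vector math header):

// Hedged sketch: build some ortho-normal basis around a normal and convert
// vectors between world space and the local basis space.
struct Basis
{
    float3 tangent;
    float3 bitangent;
    float3 normal;

    __device__ Basis( const float3& n )
    {
        // Pick a helper axis that is not parallel to n, then build the frame.
        const float3 helper = ( fabsf( n.z ) < 0.999f ) ? make_float3( 0.0f, 0.0f, 1.0f )
                                                        : make_float3( 1.0f, 0.0f, 0.0f );
        normal    = n;
        tangent   = normalize( cross( helper, n ) );
        bitangent = cross( n, tangent );
    }

    // World -> local: the three dot products mentioned above.
    __device__ float3 toLocal( const float3& v ) const
    {
        return make_float3( dot( v, tangent ), dot( v, bitangent ), dot( v, normal ) );
    }

    // Local -> world: a linear combination of the basis vectors (the matrix multiplication).
    __device__ float3 toWorld( const float3& v ) const
    {
        return v.x * tangent + v.y * bitangent + v.z * normal;
    }
};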

I’m using a small helper class named TBN (for Tangent, Bitangent, Normal) to generate ortho-normal bases and transform between them.
Example code here:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/shaders/shader_common.h#L81
Used inside the GGX shader to sample and evaluate directions in object space:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/shaders/bxdf_ggx_smith.cu#L228

Note that a single normal vector is not enough to generate a consistent ortho-normal basis orientation at runtime on round objects like a sphere. You also need a reference tangent vector, otherwise you get discontinuities in anisotropic materials or bump maps. That’s why, for example, the runtime-generated meshes in my examples calculate the geometric tangent as well, which matches the reference tangent in texture space, so that I could implement bump mapping with that tangent and a bump normal for the texture space ortho-normal basis.
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/src/Sphere.cpp#L79
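Roughly like this sketch (not the actual Sphere.cpp code; an illustrative vertex layout for a unit sphere parameterized by latitude theta and longitude phi, where phi maps to the texture u coordinate):

#include <cmath>

struct SphereVertex  // hypothetical layout for illustration
{
    float nx, ny, nz;  // normal
    float tx, ty, tz;  // geometric tangent along +u (longitude)
};

static void setSphereVertex( SphereVertex& v, const float theta, const float phi )
{
    // Unit sphere, y-up: p = (sin(theta)cos(phi), cos(theta), sin(theta)sin(phi)).
    v.nx = sinf( theta ) * cosf( phi );
    v.ny = cosf( theta );
    v.nz = sinf( theta ) * sinf( phi );

    // Normalized dP/dphi: a consistent tangent direction everywhere except the poles,
    // so it can serve as the reference tangent for the texture space basis.
    v.tx = -sinf( phi );
    v.ty = 0.0f;
    v.tz = cosf( phi );
}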

Thanks, the TBN snippet helped me get on track. I am trying to use this for a normal map, and it is mostly working now. I have a question about the tangent calculation: how can I get the tangent based on the orientation of the normal map texture?

If you need a texture space ortho-normal basis, you have to calculate the derivatives of the texture coordinates in the triangle plane to get reference vectors for the tangent (and bitangent).
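As a rough sketch of that per-triangle calculation (the same formula the tutorials below derive; the float2/float3 operators and normalize() are assumed to come from a vector math header):

// Hedged sketch: tangent of a triangle along the direction of increasing texture u.
// p0..p2 are the triangle positions, uv0..uv2 the matching texture coordinates.
static __forceinline__ __device__ float3 calculateTangent( const float3& p0, const float3& p1, const float3& p2,
                                                           const float2& uv0, const float2& uv1, const float2& uv2 )
{
    const float3 e1 = p1 - p0;    // position deltas
    const float3 e2 = p2 - p0;
    const float2 d1 = uv1 - uv0;  // texture coordinate deltas
    const float2 d2 = uv2 - uv0;

    const float det = d1.x * d2.y - d2.x * d1.y;
    const float r   = ( fabsf( det ) > 1.0e-8f ) ? 1.0f / det : 1.0f;  // guard against degenerate UVs

    // Solve e1 = d1.x * T + d1.y * B and e2 = d2.x * T + d2.y * B for the tangent T.
    return normalize( ( e1 * d2.y - e2 * d1.y ) * r );
    // The bitangent would be: normalize( ( e2 * d1.x - e1 * d2.x ) * r );
}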

This is the first hit when searching for “texture tangent space” which seems to explain it nicely: https://learnopengl.com/Advanced-Lighting/Normal-Mapping
Here is another one: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/