How to get the transform matrix of an instance?

If you use an instance transform above a custom sphere primitive, you do not need to care about the instance transform matrix inside the sphere intersection program at all, because intersection programs work in object space.

Just use optixReportIntersection() to return the intersection distance, a specific hitKind value which identifies the hit as your custom sphere primitive, and the object space normal.
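A minimal sketch of such an intersection program could look like this. The SphereData SBT record layout, the HIT_KIND_SPHERE value, and the vector math helpers (dot, operator-, etc., e.g. from the SDK's vec_math.h) are assumptions; adapt them to your own code:

```cpp
#include <optix.h>

// Hypothetical SBT record data for the custom sphere.
struct SphereData
{
    float3 center;
    float  radius;
};

// User-defined hit kinds must stay in [0, 127]; higher values are reserved.
constexpr unsigned int HIT_KIND_SPHERE = 0;

extern "C" __global__ void __intersection__sphere()
{
    const SphereData* sphere = reinterpret_cast<const SphereData*>(optixGetSbtDataPointer());

    // Ray origin and direction in object space; no instance matrix needed here.
    const float3 O = optixGetObjectRayOrigin() - sphere->center;
    const float3 D = optixGetObjectRayDirection();

    const float a = dot(D, D);
    const float b = dot(O, D);
    const float c = dot(O, O) - sphere->radius * sphere->radius;

    const float disc = b * b - a * c;
    if (disc >= 0.0f)
    {
        const float t = (-b - sqrtf(disc)) / a; // nearest root
        if (optixGetRayTmin() < t && t < optixGetRayTmax())
        {
            const float3 n = (O + t * D) / sphere->radius; // object space normal
            // Report distance, user hitKind, and the normal as three attributes.
            optixReportIntersection(t, HIT_KIND_SPHERE,
                                    __float_as_uint(n.x),
                                    __float_as_uint(n.y),
                                    __float_as_uint(n.z));
        }
    }
}
```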

Mind that hitKind has reserved value ranges which are used for the built-in triangle and curve primitives.
This is explained under optixGetHitKind here:
https://raytracing-docs.nvidia.com/optix7/api/html/group__optix__device__api.html#gaea539824cff7f2f8c3109ce061eb6ffe

When using additional vertex attributes, you need to set the OptixPipelineCompileOptions field numAttributeValues to the proper value. The default of two covers the triangle barycentrics; you need three when also reporting the sphere normal.
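On the host side that is a one-line setting (variable name is just for illustration):

```cpp
OptixPipelineCompileOptions pipelineCompileOptions = {};
pipelineCompileOptions.numAttributeValues = 3; // default of 2 only covers triangle barycentrics
```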

Now everything else happens inside the closesthit program(s).

If you’re using the same closesthit program for different geometric primitives, you need to use the optixGetHitKind() function to determine what primitive type you hit, because the vertex attribute calculation for those might be different.
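That dispatch could be sketched like this; HIT_KIND_SPHERE is the hypothetical user-defined value from the intersection program above, and the triangle branch is elided:

```cpp
extern "C" __global__ void __closesthit__radiance()
{
    float3 normalObject;

    if (optixIsTriangleHit()) // covers the reserved built-in triangle hit kinds
    {
        const float2 barys = optixGetTriangleBarycentrics();
        // ... interpolate the object space vertex attributes with barys ...
    }
    else if (optixGetHitKind() == HIT_KIND_SPHERE) // user value in [0, 127]
    {
        // The three attributes reported by the sphere intersection program.
        normalObject = make_float3(__uint_as_float(optixGetAttribute_0()),
                                   __uint_as_float(optixGetAttribute_1()),
                                   __uint_as_float(optixGetAttribute_2()));
    }
    // ...
}
```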

Then you calculate or read your object space vertex attributes and, when needed, get the current object-to-world matrix and its inverse, the world-to-object matrix, which is used to transform object space normals into world space.

Depending on the currently active transform hierarchy for that specific hit, getting the concatenated matrices is more or less involved.
The OptiX SDK provides helper functions doing this, which handle the most general case, including motion transform matrix interpolations.
You’ve already found them: optixGetWorldToObjectTransformMatrix() and optixGetObjectToWorldTransformMatrix().
If you look at their actual implementation code inside the OptiX SDK, you can see how they walk over the transform list.

I’m using those for motion transforms in my simple examples because the necessary calculations are quite involved and require a matrix inversion function (motion transforms don’t store their inverse), and I didn’t want to duplicate the code just to optimize it for that specific simple use case. It’s more expensive there than it would need to be:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_motion_blur/shaders/closesthit.cu#L72

Anyway, that’s effectively the code you need in your own closesthit program!

You do not need the transposed matrix of the inverse object-to-world transformation. That transpose is handled automatically when using the correct transform helper function optixTransformNormal(), which expects the inverse matrix (a.k.a. world-to-object) as its argument and multiplies with the transpose.
If you only ever have just that one normal to transform, there is also the combined helper function optixTransformNormalFromObjectToWorldSpace(). (I’m not a fan of such convenience functions inside an explicit API because that invites misuse.)
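In the single-normal case the closesthit code boils down to one call (normalize is assumed to come from the SDK's vector math helpers):

```cpp
// Walks the transform list and applies the transpose of the
// world-to-object matrix internally.
const float3 normalWorld = normalize(optixTransformNormalFromObjectToWorldSpace(normalObject));
```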

Now, for performance reasons you might not want to use these general purpose Get-TransformMatrix implementations.
For example, if you only use a single transform level (OptixPipelineCompileOptions traversableGraphFlags = OPTIX_TRAVERSABLE_GRAPH_FLAG_ALLOW_SINGLE_LEVEL_INSTANCING), it’s really simple: there is always exactly one instance transform, and the instance holds the inverse matrix as well.

I’ve implemented my own routines for that case here:
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/shaders/closesthit.cu#L46
https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/rtigo3/shaders/closesthit.cu#L153
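As a rough sketch of that fast path (not copied from those files; normalObject and the normalize helper are assumed as above):

```cpp
// Valid only with OPTIX_TRAVERSABLE_GRAPH_FLAG_ALLOW_SINGLE_LEVEL_INSTANCING:
// the transform list holds exactly one instance, which stores the inverse
// matrix as well, as three float4 rows of a row-major 3x4 matrix.
const OptixTraversableHandle handle = optixGetTransformListHandle(0);
const float4* worldToObject = optixGetInstanceInverseTransformFromHandle(handle);

// Multiply the object space normal with the transpose of world-to-object.
const float3 n = normalObject;
const float3 normalWorld = normalize(make_float3(
    worldToObject[0].x * n.x + worldToObject[1].x * n.y + worldToObject[2].x * n.z,
    worldToObject[0].y * n.x + worldToObject[1].y * n.y + worldToObject[2].y * n.z,
    worldToObject[0].z * n.x + worldToObject[1].z * n.y + worldToObject[2].z * n.z));
```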
