Basic question on how SRT motion transforms work in OptiX 7

Dear all,

Could you help me understand how exactly SRT motion transforms work in OptiX 7.4?
Consider the following example, in which I build IAS->SRT->GAS, where the SRT implements a translation along X by 0.25 over the motion interval (a simplified version of optixSimpleMotionBlur with the SRT applied to a triangle instead of a sphere):

// A single-triangle mesh
const std::array<float3, 3> vertices =
{ { {  1.0f, -1.0f, -1.0f }, {  0.0f, -1.0f,  1.0f }, { -1.0f, -1.0f, -1.0f } } };
CUdeviceptr d_vertices = 0; // Copy the mesh to device memory
...
// Build GAS, save handle to OptixTraversableHandle m_gas_handle
OptixAccelBuildOptions accelOptions = {};
accelOptions.buildFlags = OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS | OPTIX_BUILD_FLAG_ALLOW_COMPACTION;
...
// Prepare SRT motion transform as a parent node to GAS
OptixSRTData srt_data[2] = 
{
    //sx,   a,   b, pvx,  sy,   c, pvy,  sz, pvz,  qx,  qy,  qz,  qw,    tx,  ty,  tz
    {1.f, 0.f, 0.f, 0.f, 1.f, 0.f, 0.f, 1.f, 0.f, 0.f, 0.f, 0.f, 1.f,   0.f, 0.f, 0.f},
    {1.f, 0.f, 0.f, 0.f, 1.f, 0.f, 0.f, 1.f, 0.f, 0.f, 0.f, 0.f, 1.f, 0.25f, 0.f, 0.f}
};
OptixSRTMotionTransform motion_transform = {};
motion_transform.child = m_gas_handle; // GAS
motion_transform.motionOptions.numKeys   = 2;
motion_transform.motionOptions.timeBegin = 0.f;
motion_transform.motionOptions.timeEnd   = 1.f;
memcpy( motion_transform.srtData, srt_data, 2 * 16 * sizeof( float ) );
...
// Prepare IAS as a parent node to motion transform
const float static_transform[12] = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0 };
const size_t instance_size_in_bytes = sizeof( OptixInstance ) * 1;
OptixInstance optix_instances[ 1 ];
memset( optix_instances, 0, instance_size_in_bytes );

optix_instances[0].flags             = OPTIX_INSTANCE_FLAG_NONE;
optix_instances[0].instanceId        = 0;
optix_instances[0].sbtOffset         = 0;
optix_instances[0].visibilityMask    = 1;
optix_instances[0].traversableHandle = m_motion_transform_handle;
memcpy( optix_instances[0].transform, static_transform, sizeof( float ) * 12 );
// Copy instances to the device memory
CUdeviceptr  d_instances;
...
// Build IAS
OptixBuildInput instance_input = {};
instance_input.type                       = OPTIX_BUILD_INPUT_TYPE_INSTANCES;
instance_input.instanceArray.instances    = d_instances;
instance_input.instanceArray.numInstances = 1;
OptixAccelBuildOptions accel_options = {};
accel_options.buildFlags              = OPTIX_BUILD_FLAG_NONE;
accel_options.operation               = OPTIX_BUILD_OPERATION_BUILD;
...

I cast one ray with ray_time=1.0 and direction (0, 1, 0), and I want to find the intersection point in the closest hit program:

extern "C" __global__ void __closesthit__project()
{
    float3 triangle[3];
    optixGetTriangleVertexData(
        optixGetGASTraversableHandle(),
        optixGetPrimitiveIndex(),
        optixGetSbtGASIndex(),
        optixGetRayTime(),
        &triangle[0]
    );
    const float2 barycentrics = optixGetTriangleBarycentrics();
    float3 hit_point_bary = triangle[0]*(1-barycentrics.x-barycentrics.y) +
                            triangle[1]*barycentrics.x +
                            triangle[2]*barycentrics.y;
    float3 hit_point_tmax = optixGetWorldRayOrigin() + scale(optixGetRayTmax(), optixGetWorldRayDirection());
    // Print the triangle and the hit points
    ...
}

The printout for the triangle variable is (1.0, -1.0, -1.0), (0.0, -1.0, 1.0), (-1.0, -1.0, -1.0), for the hit_point_bary it’s (-0.25, -1.0, 0.0), and for the hit_point_tmax it’s (0.0, -1.0, 0.0).

  • Is it correct that hit_point_bary is in the object coordinates, while hit_point_tmax is in the world coordinates?
  • How do I get the hit point in world coordinates using the barycentric coordinates, given that the returned triangle vertices are not transformed?

In the programming guide, I read:

The motion matrix transform traversable (OptixMatrixMotionTransform) transforms the ray during traversal using a motion matrix.

  • How exactly is the ray being transformed? Does this imply that the barycentric coordinates are actually transformed?

I feel this is a very basic question about motion, but I still fail to understand it…

Thanks a lot in advance!

Kind regards,
Pavel

Hi @PPavel,

Barycentric coordinates are transform independent; they describe a relationship to the triangle’s vertices, so, for example, the barycentrics don’t change if you transform the ray & geometry together.

Your triangle vertices returned by optixGetTriangleVertexData() are in object space (it returns the same data you passed in to your GAS build, unmodified). If you want to find the world space hit point using barycentrics, you can query the vertex data in object space and then transform the vertices into world space using optixGetObjectToWorldTransformMatrix() or optixTransformPointFromObjectToWorldSpace(). Once you have the world space vertices, you can use the barycentrics directly from optixGetTriangleBarycentrics() to recover the world space hit point.

Yes, hit_point_tmax is in world space units, so reconstructing the hit point using the world ray origin, direction, and tmax should typically give you the same hit point as the method using barycentrics, to within some small epsilon of precision. If your geometry is far away from the camera (ray origin) then barycentrics can provide better precision at the slightly higher cost of querying the triangle vertices and object->world transform, and applying the transform.
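
To make that concrete, here is a minimal sketch of both reconstructions in a closest hit program (assuming the float3 operators from the SDK’s sutil/vec_math.h; the program name is just for illustration, and vertex fetch requires the GAS to be built with OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS, as in your code):

extern "C" __global__ void __closesthit__world_hit()
{
    // Fetch the object-space vertices at the current ray time.
    float3 v[3];
    optixGetTriangleVertexData( optixGetGASTraversableHandle(),
                                optixGetPrimitiveIndex(),
                                optixGetSbtGASIndex(),
                                optixGetRayTime(),
                                v );

    // Transform each vertex into world space; the built-in walks the whole
    // transform list (instance, SRT/matrix motion, static) at the ray time.
    for( int i = 0; i < 3; ++i )
        v[i] = optixTransformPointFromObjectToWorldSpace( v[i] );

    // Barycentrics are space independent, so apply them to the world-space vertices.
    const float2 bc = optixGetTriangleBarycentrics();
    const float3 hit_bary = v[0] * ( 1.0f - bc.x - bc.y ) + v[1] * bc.x + v[2] * bc.y;

    // Should match the tmax reconstruction to within a small epsilon.
    const float3 hit_tmax = optixGetWorldRayOrigin()
                          + optixGetRayTmax() * optixGetWorldRayDirection();
}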

How exactly is the ray being transformed? Does this imply that the barycentric coordinates are actually transformed?

The programming guide is referring to the standard method of ray tracing instances, where the ray origin & direction are transformed from world space into object space before the intersection test. This does not imply that barycentrics are transformed, since barycentrics are space independent, and also because the barycentric coordinates of the hit point are only known after the intersection test; they are an output of the intersector.
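
Applied to the setup in this thread, the numbers work out like this (a sketch of what traversal does internally at ray time t = 1, again assuming the float3 operators from sutil/vec_math.h):

const float3 world_origin    = optixGetWorldRayOrigin();    // as passed to optixTrace
const float3 world_direction = optixGetWorldRayDirection();

// At t = 1 the two SRT keys interpolate to a pure translation.
const float3 T = make_float3( 0.25f, 0.0f, 0.0f );

// World -> object: traversal applies the inverse of the interpolated
// transform to the ray before the intersection test; a pure translation
// leaves the direction unchanged.
const float3 object_origin    = world_origin - T;
const float3 object_direction = world_direction;

// The intersector then reports the object-space hit (-0.25, -1, 0);
// mapping it back, (-0.25, -1, 0) + T = (0, -1, 0), matches hit_point_tmax above.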

BTW, nothing here is specific to the SRT transform type. This all applies to any transform type OptiX supports: SRT, static, and motion matrix. Querying the object->world matrix should work in all cases. Some of this may be a little easier to see by perusing the header files optix_7_device.h & optix_7_types.h directly, as a supplement to what’s in the Programming Guide.
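
For example, the transform list for the current hit can be inspected manually with the device functions declared there (a sketch; error handling omitted):

// Handles are ordered from the instance at the top down toward the GAS.
const unsigned int count = optixGetTransformListSize();
for( unsigned int i = 0; i < count; ++i )
{
    const OptixTraversableHandle handle = optixGetTransformListHandle( i );
    if( optixGetTransformTypeFromHandle( handle ) == OPTIX_TRANSFORM_TYPE_SRT_MOTION_TRANSFORM )
    {
        const OptixSRTMotionTransform* t = optixGetSRTMotionTransformFromHandle( handle );
        // t->srtData holds the motion keys that traversal interpolates at the ray time.
    }
}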


David.


Hi David,

That was a perfect explanation, thank you!

Pavel


For another example using motion transforms, have a look at the intro_motion_blur example, which uses both linear (matrix) and SRT motion transforms, and also shows camera motion blur, which only affects the primary ray calculation.

This code inside the closest hit program shows how to transform the object space vertex attributes into world space, and how to apply the inverse transpose to the normal vectors: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_motion_blur/shaders/closesthit.cu#L72
It uses the helper functions optix_impl::optixGetObjectToWorldTransformMatrix() and optix_impl::optixGetWorldToObjectTransformMatrix() from the header OptiX SDK 7.4.0\include\internal\optix_7_device_impl_transformations.h. Those helpers handle arbitrary transform lists, including motion transforms, which is a little involved.
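
The same can also be done with the public built-ins alone; here is a minimal sketch for a geometric normal inside a closest hit program (assuming cross() and normalize() from a float3 helper header such as sutil/vec_math.h, and a GAS built with OPTIX_BUILD_FLAG_ALLOW_RANDOM_VERTEX_ACCESS):

// Fetch the object-space vertices at the current ray time.
float3 v[3];
optixGetTriangleVertexData( optixGetGASTraversableHandle(),
                            optixGetPrimitiveIndex(),
                            optixGetSbtGASIndex(),
                            optixGetRayTime(),
                            v );

// Geometric normal in object space.
const float3 n_obj = cross( v[1] - v[0], v[2] - v[0] );

// Normals transform with the inverse transpose; this built-in handles the
// whole transform list, including motion transforms, at the current ray time.
const float3 n_world = normalize( optixTransformNormalFromObjectToWorldSpace( n_obj ) );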
