timeBegin and timeEnd

OptiX supports motion transformations as matrices, which are linearly interpolated (not suitable for rotations), and as scale-rotation-translation (SRT) transforms, where the rotation is represented as a quaternion, which can be interpolated correctly.
(Make sure not to rotate 180 degrees or more between two keys. That won’t result in the expected rotation because the spherical interpolation takes the shortest path. Add more keys so that the rotation between consecutive keys stays below 180 degrees.)
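
For illustration, here is a minimal host-side sketch of a two-key SRT motion transform using the OptiX 8 structs; `context` (the OptixDeviceContext), `childHandle` (the traversable this transform animates) and error checking are assumed to exist:

```cpp
#include <optix.h>
#include <cuda_runtime.h>
#include <cmath>

OptixSRTMotionTransform srt = {};
srt.child                   = childHandle;
srt.motionOptions.numKeys   = 2;
srt.motionOptions.flags     = OPTIX_MOTION_FLAG_NONE;
srt.motionOptions.timeBegin = 0.0f;
srt.motionOptions.timeEnd   = 1.0f;

// Key 0: unit scale, identity quaternion (qx, qy, qz, qw), no translation.
srt.srtData[0] = { 1.0f, 0.0f, 0.0f, 0.0f,   // sx, a, b, pvx
                   1.0f, 0.0f, 0.0f,         // sy, c, pvy
                   1.0f, 0.0f,               // sz, pvz
                   0.0f, 0.0f, 0.0f, 1.0f,   // qx, qy, qz, qw
                   0.0f, 0.0f, 0.0f };       // tx, ty, tz

// Key 1: 90 degree rotation around the y-axis. Staying below 180 degrees
// between consecutive keys keeps the slerp on the intended path.
const float halfAngle = 0.5f * (90.0f * 3.14159265f / 180.0f);
srt.srtData[1]    = srt.srtData[0];
srt.srtData[1].qy = sinf(halfAngle);
srt.srtData[1].qw = cosf(halfAngle);

// Upload the struct and convert the device pointer into a traversable handle.
CUdeviceptr d_srt = 0;
cudaMalloc(reinterpret_cast<void**>(&d_srt), sizeof(OptixSRTMotionTransform));
cudaMemcpy(reinterpret_cast<void*>(d_srt), &srt, sizeof(srt), cudaMemcpyHostToDevice);

OptixTraversableHandle srtHandle = 0;
optixConvertPointerToTraversableHandle(context, d_srt,
                                       OPTIX_TRAVERSABLE_TYPE_SRT_MOTION_TRANSFORM,
                                       &srtHandle);
```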

OptiX can build motion acceleration structures (AS) over such motion transforms and also over motion geometry (e.g. for morphing). This automatically generates axis-aligned bounding boxes (AABBs) inside the OptiX-internal AS which cover the volume swept by the moving object(s).
Note that motion AS need more memory! Don’t overdo the number of keys inside one AS; often two are enough.
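
As a sketch, a motion GAS over morphing triangle geometry could be set up like this; `d_verticesAtBegin`, `d_verticesAtEnd` and `numVertices` are assumptions, and the usual sizing/building with optixAccelComputeMemoryUsage() and optixAccelBuild() is omitted:

```cpp
#include <optix.h>

// Two device buffers holding the same (non-indexed) triangle topology,
// one at timeBegin and one at timeEnd.
CUdeviceptr vertexKeys[2] = { d_verticesAtBegin, d_verticesAtEnd };

const unsigned int triangleFlags[1] = { OPTIX_GEOMETRY_FLAG_NONE };

OptixBuildInput buildInput = {};
buildInput.type                              = OPTIX_BUILD_INPUT_TYPE_TRIANGLES;
buildInput.triangleArray.vertexFormat        = OPTIX_VERTEX_FORMAT_FLOAT3;
buildInput.triangleArray.vertexStrideInBytes = sizeof(float) * 3;
buildInput.triangleArray.numVertices         = numVertices;
buildInput.triangleArray.vertexBuffers       = vertexKeys; // one buffer per motion key!
buildInput.triangleArray.flags               = triangleFlags;
buildInput.triangleArray.numSbtRecords       = 1;

OptixAccelBuildOptions accelOptions = {};
accelOptions.buildFlags = OPTIX_BUILD_FLAG_NONE;
accelOptions.operation  = OPTIX_OPERATION_BUILD;
// The motion options on the build options turn this into a motion AS.
accelOptions.motionOptions.numKeys   = 2;  // each additional key costs memory
accelOptions.motionOptions.flags     = OPTIX_MOTION_FLAG_NONE;
accelOptions.motionOptions.timeBegin = 0.0f;
accelOptions.motionOptions.timeEnd   = 1.0f;
```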

There must be at least two keys given to a motion traversable, and timeBegin must be less than timeEnd (equal is an error). If more keys are given to a motion in OptiX, they are evenly spaced in time over that interval.
These begin and end times are user-defined, meaning they don’t need to be [0.0f, 1.0f] like in other APIs. You can pick your own times: say you define your motion inside a scene in seconds from the start of the timeline, then you can use those numbers directly; there is no need to scale and bias time intervals in OptiX.
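
For example, assuming a motion that runs from second 2 to second 4 on your own scene timeline:

```cpp
// Three keys over [2.0 s, 4.0 s] land at t = 2.0, 3.0 and 4.0 seconds,
// because intermediate keys are evenly spaced over the interval.
OptixMotionOptions motionOptions = {};
motionOptions.numKeys   = 3;
motionOptions.flags     = OPTIX_MOTION_FLAG_NONE;
motionOptions.timeBegin = 2.0f; // seconds on your scene timeline
motionOptions.timeEnd   = 4.0f;
```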

As I understand kinematics for a moving object, a position/rotation/etc. are all defined at a single point in time and can be accurately interpolated if enough samples are provided.

That’s the point. You do not need to provide all “samples” to OptiX. Instead you only describe the motion via transforms or geometry with the minimum necessary number of motion keys (>= 2), and OptiX then calculates the “motion sample” from the motion information inside the AS for each rayTime automatically and intersects the ray with that.
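
Conceptually (this is a plain C++ sketch of the idea, not the SDK’s actual code, which lives in optix_device_impl_transformations.h), the mapping from a rayTime to the two keys being interpolated looks like this:

```cpp
#include <algorithm>
#include <cmath>

// Maps rayTime onto evenly spaced motion keys over [timeBegin, timeEnd]
// and returns the two key indices plus the interpolation weight.
void motionSample(float timeBegin, float timeEnd, int numKeys, float rayTime,
                  int& key0, int& key1, float& weight)
{
    // Normalize rayTime into key space [0, numKeys - 1].
    const float t = (rayTime - timeBegin) / (timeEnd - timeBegin) * float(numKeys - 1);

    // Clamp to a valid key interval; times outside [timeBegin, timeEnd]
    // use the border keys (unless the vanish motion flags are set).
    key0   = std::max(0, std::min(numKeys - 2, static_cast<int>(std::floor(t))));
    key1   = key0 + 1;
    weight = std::min(1.0f, std::max(0.0f, t - float(key0)));

    // The two keys are then blended with this weight: lerp for matrix
    // transforms, slerp of the quaternions for SRT transforms.
}
```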

Now the fun part!

The optixTrace argument rayTime selects the point in time at which the intersections of that ray with the geometry in your AS should be calculated.
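
rayTime is the sixth optixTrace parameter, right after the [tmin, tmax] interval. A device-side sketch, where handle, origin, direction and rayTime are assumed to be in scope:

```cpp
unsigned int p0 = 0, p1 = 0; // example payload registers

optixTrace(handle,
           origin, direction,
           0.0f,                      // tmin
           1.0e16f,                   // tmax
           rayTime,                   // time used for the motion interpolation
           OptixVisibilityMask(255),
           OPTIX_RAY_FLAG_NONE,
           0, 1, 0,                   // SBT offset, SBT stride, miss index
           p0, p1);
```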

Since that is per ray(!), you can do interesting things with it, for example:

  • If all rays are shot at the same time, you can scrub through an animation simply by changing the rayTime.
  • If you pick a different continuous rayTime for each row or column of the image, you have effectively implemented a rolling camera shutter. (In the second example image below, the green torus rotates and moves downward while each row of the image is at a different time, from timeBegin at the top to timeEnd at the bottom.)
  • The main feature, though, is motion blur. If you stochastically select a different time for each ray in your image, then you automatically get motion blur when accumulating the results. (See the raygen sketch after this list.)
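
Here is a raygen sketch covering the last two bullets; the Params struct, its timeBegin/timeEnd members and the tiny rnd() helper are hypothetical placeholders:

```cpp
#include <optix.h>

// Hypothetical launch parameter struct holding the motion interval.
struct Params { float timeBegin; float timeEnd; };
extern "C" __constant__ Params params;

// Tiny LCG just to keep the sketch self-contained; use a proper RNG in practice.
__forceinline__ __device__ float rnd(unsigned int& seed)
{
    seed = seed * 1664525u + 1013904223u;
    return float(seed >> 8) * (1.0f / 16777216.0f); // uniform in [0, 1)
}

extern "C" __global__ void __raygen__motion()
{
    const uint3 idx = optixGetLaunchIndex();
    const uint3 dim = optixGetLaunchDimensions();

    // Motion blur: a stochastically chosen time per ray; accumulating many
    // such samples blurs the motion automatically.
    unsigned int seed = idx.y * dim.x + idx.x; // simplistic per-pixel seed
    float rayTime = params.timeBegin + rnd(seed) * (params.timeEnd - params.timeBegin);

    // Rolling shutter instead: a continuous time per image row,
    // timeBegin at the top row, timeEnd at the bottom row.
    // float rayTime = params.timeBegin
    //               + (float(idx.y) / float(dim.y - 1)) * (params.timeEnd - params.timeBegin);

    // ... generate the primary ray and pass rayTime to optixTrace() ...
}
```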


The evaluation of the current transformation per rayTime is shown in the helper functions in the OptiX SDK 8.0.0\include\internal\optix_device_impl_transformations.h header.
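
In device code you normally don’t call those internal helpers directly; the public transform helpers in optix_device.h evaluate the motion transforms at the current optixGetRayTime() for you. A closest-hit sketch:

```cpp
#include <optix.h>

extern "C" __global__ void __closesthit__radiance()
{
    // Placeholder object-space normal; a real program reconstructs it from
    // the triangle's vertex data (see the linked closesthit.cu).
    const float3 objectNormal = make_float3(0.0f, 1.0f, 0.0f);

    // This helper walks the transform list of the current hit and evaluates
    // matrix/SRT motion transforms at optixGetRayTime() before applying them.
    const float3 worldNormal = optixTransformNormalFromObjectToWorldSpace(objectNormal);

    // The world-space hit position needs no transform at all:
    const float3 rayOrigin    = optixGetWorldRayOrigin();
    const float3 rayDirection = optixGetWorldRayDirection();
    const float  tHit         = optixGetRayTmax();
    const float3 worldPos     = make_float3(rayOrigin.x + tHit * rayDirection.x,
                                            rayOrigin.y + tHit * rayDirection.y,
                                            rayOrigin.z + tHit * rayDirection.z);

    // ... shade with worldPos / worldNormal ...
}
```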

An example using that is my intro_motion_blur program.
Different rayTime per fragment: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_motion_blur/shaders/raygeneration.cu#L57
Transformation calculation: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_motion_blur/shaders/closesthit.cu#L72

Please read this OptiX Programming Guide chapter for more information.
https://raytracing-docs.nvidia.com/optix8/guide/index.html#acceleration_structures#motion-blur

Please also read these related threads:
https://forums.developer.nvidia.com/t/how-many-keys-should-be-defined-for-motion-instance-as/264839
https://forums.developer.nvidia.com/t/optix-motion-blur-with-multiple-key-frames/168164