timeBegin and timeEnd

This may be obvious to everyone else, but I have not been able to reconcile why there is a timeBegin and timeEnd when defining scene motion (with either SRT or matrix), as well as why the same values are passed into an optixTrace call. As I understand kinematics for a moving object, a position/rotation/etc. are all defined at a single point in time and can be accurately interpolated if enough samples are provided. Still, these values give the impression that a kinematic state is valid over an interval of time rather than at a single instant.

OptiX supports motion transformations with matrices, which are linearly interpolated (not suitable for rotations), and with scale-rotation-translation (SRT) transforms, where the rotations are represented as quaternions which can be interpolated correctly.
(Make sure not to rotate 180 degrees or more between two keys. That won’t result in the expected rotation because the spherical interpolation takes the shortest path. Add more keys to keep the rotation between two neighboring keys below 180 degrees.)
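To see why the shortest-path behavior matters, here is a minimal quaternion slerp sketch in plain C++. The `Quat` type and `slerp` helper are illustrative only, not OptiX API; any standard slerp behaves the same way.

```cpp
#include <cassert>
#include <cmath>

// Minimal quaternion (w, x, y, z). Hypothetical helper, not part of the OptiX API.
struct Quat { float w, x, y, z; };

static float dot(const Quat& a, const Quat& b)
{
  return a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
}

// Spherical linear interpolation. Like any standard slerp, it follows the
// shortest arc between the two inputs, which is why a rotation of 180 degrees
// or more between two motion keys does not interpolate the way you might expect.
static Quat slerp(Quat a, const Quat& b, float t)
{
  float d = dot(a, b);
  if (d < 0.0f) // Take the shorter of the two possible arcs by flipping one input.
  {
    a.w = -a.w; a.x = -a.x; a.y = -a.y; a.z = -a.z;
    d = -d;
  }
  if (d > 0.9995f) // Nearly parallel: fall back to a normalized lerp.
  {
    Quat r = { a.w + t * (b.w - a.w), a.x + t * (b.x - a.x),
               a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
    const float n = std::sqrt(dot(r, r));
    return { r.w / n, r.x / n, r.y / n, r.z / n };
  }
  const float theta = std::acos(d);
  const float sa = std::sin((1.0f - t) * theta) / std::sin(theta);
  const float sb = std::sin(t * theta) / std::sin(theta);
  return { sa * a.w + sb * b.w, sa * a.x + sb * b.x,
           sa * a.y + sb * b.y, sa * a.z + sb * b.z };
}
```

For example, interpolating halfway between the identity and a 90-degree rotation about Z yields the expected 45-degree rotation, because the per-key angle stays well below 180 degrees.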

OptiX can build motion acceleration structures (AS) over such motion transforms and also over motion geometry (like for morphing). This will automatically generate axis aligned bounding boxes (AABB) which cover the volume of the moving object(s) inside the OptiX internal AS.
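As a rough mental model only (OptiX computes these bounds internally, so this is purely illustrative), the covering volume for linear motion can be pictured as the union of the axis-aligned boxes at each key:

```cpp
#include <algorithm>
#include <cassert>

// Conceptual sketch, not OptiX code: a box enclosing the object at every
// motion key also encloses it in between when the keys are linearly
// interpolated, because box corners move linearly too.
struct Aabb { float minX, minY, minZ, maxX, maxY, maxZ; };

static Aabb merge(const Aabb& a, const Aabb& b)
{
  return { std::min(a.minX, b.minX), std::min(a.minY, b.minY), std::min(a.minZ, b.minZ),
           std::max(a.maxX, b.maxX), std::max(a.maxY, b.maxY), std::max(a.maxZ, b.maxZ) };
}

static Aabb motionBounds(const Aabb* keys, int numKeys)
{
  Aabb r = keys[0];
  for (int i = 1; i < numKeys; ++i)
  {
    r = merge(r, keys[i]); // Grow the bound to cover each key's box.
  }
  return r;
}
```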
Note that motion AS need more memory! Don’t overdo it with the number of keys inside one AS. Often two is enough.

There must be at least two keys given to a motion traversable, and timeBegin must be less than timeEnd (equal values are an error). If more than two keys are given to a motion in OptiX, they are evenly spaced over the interval.
These begin and end times are user defined, meaning they don’t necessarily need to be [0.0f, 1.0f] like in other APIs. You can pick your own times there. Say you define your motion inside a scene in seconds from the start of the timeline: then you can use those numbers directly, with no need to scale and bias time intervals in OptiX.
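That even spacing can be sketched like this (the `keyTime` helper is hypothetical, not an OptiX function):

```cpp
#include <cassert>

// Evenly spaced key times over a user-defined [timeBegin, timeEnd] interval,
// matching how OptiX distributes N motion keys (N >= 2).
static float keyTime(float timeBegin, float timeEnd, int numKeys, int key)
{
  return timeBegin + (timeEnd - timeBegin)
       * static_cast<float>(key) / static_cast<float>(numKeys - 1);
}
```

For example, with a timeline in seconds, timeBegin = 2.0, timeEnd = 4.0 and five keys, the keys sit at 2.0, 2.5, 3.0, 3.5 and 4.0 seconds.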

As I understand kinematics for a moving object, a position/rotation/etc. are all defined at a single point in time and can be accurately interpolated if enough samples are provided

That’s the point. You do not need to provide all “samples” to OptiX. Instead you only describe the motion via transforms or geometry with the minimum necessary number of motion keys (>= 2), and then OptiX automatically calculates the “motion sample” from the motion information inside the AS for each rayTime and intersects the ray with that.
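For intuition, here is a simplified, non-OptiX sketch of that per-rayTime sampling for linearly interpolated keys, with a single float standing in for a full 3x4 matrix (SRT quaternion keys would be slerped instead):

```cpp
#include <cassert>

// Sketch of the idea behind per-rayTime motion sampling: pick the key
// interval containing rayTime and blend the two bounding keys.
// "keys" holds one value per key here (think: one matrix element).
static float sampleMotion(const float* keys, int numKeys,
                          float timeBegin, float timeEnd, float rayTime)
{
  // Map rayTime into continuous key space [0, numKeys - 1].
  const float u = (rayTime - timeBegin) / (timeEnd - timeBegin)
                * static_cast<float>(numKeys - 1);
  // Clamp to the border keys for times outside the interval.
  if (u <= 0.0f) return keys[0];
  if (u >= static_cast<float>(numKeys - 1)) return keys[numKeys - 1];
  const int   i = static_cast<int>(u);   // Lower key of the interval.
  const float t = u - static_cast<float>(i); // Blend factor within it.
  return keys[i] + t * (keys[i + 1] - keys[i]);
}
```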

Now the fun part!

The optixTrace argument rayTime lets you select the time for which the intersections with the geometry in your AS should be calculated.

Since that is per ray(!), you can do interesting things with it, for example:

  • If all rays are shot at the same time, you can scrub through an animation simply by changing the rayTime.
  • If you pick a different continuous rayTime for each row or column of the image, you have effectively implemented a rolling camera shutter. (In the second example image below, the green torus rotates and moves downward while each row of the image is at a different time, from timeBegin at the top to timeEnd at the bottom.)
  • The main feature, though, is motion blur. If you stochastically select a different time for each ray in your image, you automatically get motion blur when accumulating the results.
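The last two bullets could be sketched like this (hypothetical helpers in plain C++, not OptiX API; `xi` stands for a per-ray uniform random sample in [0, 1)):

```cpp
#include <cassert>

// Rolling shutter: each image row y in [0, height) gets its own rayTime,
// sweeping linearly from timeBegin (top row) to timeEnd (bottom row).
static float rollingShutterTime(int y, int height, float timeBegin, float timeEnd)
{
  return timeBegin + (timeEnd - timeBegin)
       * static_cast<float>(y) / static_cast<float>(height - 1);
}

// Motion blur: a uniform random sample xi in [0, 1) per ray spreads the
// exposure over the whole interval; accumulating frames averages it out.
static float motionBlurTime(float xi, float timeBegin, float timeEnd)
{
  return timeBegin + (timeEnd - timeBegin) * xi;
}
```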

The evaluation of the current transformation per rayTime is shown in the helper functions inside the OptiX SDK 8.0.0\include\internal\optix_device_impl_transformations.h header.

An example using that is my intro_motion_blur program.
Different rayTime per fragment: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_motion_blur/shaders/raygeneration.cu#L57
Transformation calculation: https://github.com/NVIDIA/OptiX_Apps/blob/master/apps/intro_motion_blur/shaders/closesthit.cu#L72

Please read this OptiX Programming Guide chapter for more information.

Please also read these related threads:

Okay, I’ve read that before, but for some reason, it didn’t click until I read your response.

I am still confused about optixTrace because it takes a tmin and tmax as inputs. If a ray is supposed to intersect the scene at a single time, then why is there a tmin/tmax? Can tmin/tmax be the same value, as in your first bullet?

Okay, full disclosure: I was so caught up in tmin/tmax that I hadn’t seen the rayTime input. Why are there three inputs for time?

Nooooo, that’s not what these mean. :-)

The interval [tmin, tmax] is not a pair of times; these are distances along the ray direction.
That is the intersection test interval along the ray direction in space. These values must not be negative, and tmin must be less than tmax; tmin == tmax is a miss.
With each accepted intersection during the ray traversal inside the acceleration structure, tmax shrinks, and you access the closest intersection distance with optixGetRayTmax inside your closest-hit program.
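A toy, non-OptiX illustration of that shrinking interval:

```cpp
#include <cassert>

// Toy illustration of how [tmin, tmax] behaves as a distance interval:
// every accepted hit shrinks tmax, so the closest hit is what remains.
static float closestHit(const float* hitDistances, int count, float tmin, float tmax)
{
  for (int i = 0; i < count; ++i)
  {
    const float t = hitDistances[i];
    if (tmin <= t && t < tmax) // Only accept hits inside the current interval.
    {
      tmax = t; // Shrink the interval; this is what optixGetRayTmax reports.
    }
  }
  return tmax; // Closest accepted distance (or the original tmax on a miss).
}
```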

rayTime is another dimension and only used when motion constructs are used inside the scene.

Oh, that is very helpful. Thank you.
