Motion blur in parallel for physics simulation

Hi!

I am working on upgrading our physics simulator from OptiX 6 to OptiX 7, and want to improve some parts of the simulator at the same time. I have been thinking about how to improve performance when we simulate over time. I have an idea of my own, but would like to hear whether you have other suggestions on how to do the simulation efficiently.

Background:
My 3D model consists of a few hundred thousand triangles, and I am shooting a few billion rays into the model for each time sample. The rays do not spawn any new rays; there is no reflection or shadowing happening in the ray launch programs. That is currently handled outside the GPUs.

The difference between two time samples is minimal: maybe a few hundred triangles move. So the vast majority of the rays will be unaffected.

My plan was to copy the 3D models that move and create multiple instances of them, one for each time sample. Then, in the any-hit program, I would record which time instance this exact version of the triangle is valid for and keep track of this in memory. I.e., not using the built-in motion blur at all.
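Roughly what I have in mind for the any-hit side (a sketch only; the buffer layout and names are placeholders of mine, and I would encode the time sample in the instance ID when building the IAS):

```cpp
#include <optix.h>

// Hypothetical launch parameters; names are placeholders.
struct Params
{
    unsigned char* hitFlags;       // one flag per ray and time sample
    unsigned int   numTimeSamples;
};
extern "C" { __constant__ Params params; }

// Assumes each copy of a moving model was added to the IAS with its
// time-sample index stored in OptixInstance::instanceId, and that the
// static geometry uses a reserved ID such as 0xFFFFFFFF.
extern "C" __global__ void __anyhit__record_time_sample()
{
    const unsigned int timeSample = optixGetInstanceId();
    const unsigned int rayIndex   = optixGetPayload_0(); // set in ray-gen

    if( timeSample != 0xFFFFFFFFu )
        params.hitFlags[rayIndex * params.numTimeSamples + timeSample] = 1;

    // Keep traversing so the copies for the other time samples are tested too.
    optixIgnoreIntersection();
}
```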

However, I want to hear if you think this is a good approach. Is there any way to easily detect which rays will be affected by the motion, and maybe launch just these rays for all time samples? Can I easily check whether a ray will intersect the moving objects’ bounding boxes, and if so, split the ray up and shoot it for all wanted time samples? That would simplify the setup for me since I wouldn’t need to create copies of the moving objects…

Hi mbglo4q,
great to hear you’re upgrading to OptiX 7! Your idea for parallel motion blur sounds interesting. Given the background info you provided, I see nothing wrong with your approach to the optimization, but maybe there are other solutions. Do I understand correctly that currently the billions of rays are launched for each time step, and because only a tiny fraction hit “moved triangles”, all the others just redundantly recompute the results from the previous time step? That is, the ray geometry does not change with time, only (some of) the triangles?

A couple more questions:

  • Are you currently using closest-hit programs to resolve visibility, or are all rays processing all intersections, i.e., relying on the any-hit programs to compute their results?
  • How are you “moving the triangles”? Are you rebuilding the acceleration structures at each time step, or are you using built-in motion, i.e., a motion geometry acceleration structure? (See the sketch after this list for the latter.)
  • With billions of rays, do you consolidate per-ray results into some kind of aggregated result in the ray-gen program?
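For reference, this is roughly what I mean by a motion geometry acceleration structure: two motion keys spanning a time step, with one vertex buffer per key. A sketch only, with placeholder names of mine, and the actual optixAccelComputeMemoryUsage/optixAccelBuild calls omitted:

```cpp
#include <cuda.h>
#include <cuda_runtime.h>
#include <optix.h>

// Sketch: build input for a motion GAS with two keys (geometry at the
// beginning and end of a time step). d_verticesKey0, d_verticesKey1 and
// numVertices are placeholders of mine.
void setupMotionGasBuild( CUdeviceptr d_verticesKey0,
                          CUdeviceptr d_verticesKey1,
                          unsigned int numVertices )
{
    OptixAccelBuildOptions buildOptions = {};
    buildOptions.buildFlags              = OPTIX_BUILD_FLAG_NONE;
    buildOptions.operation               = OPTIX_BUILD_OPERATION_BUILD;
    buildOptions.motionOptions.numKeys   = 2;    // one vertex buffer per key
    buildOptions.motionOptions.timeBegin = 0.0f;
    buildOptions.motionOptions.timeEnd   = 1.0f;

    CUdeviceptr vertexBuffers[2] = { d_verticesKey0, d_verticesKey1 };

    const unsigned int triangleFlags[1] = { OPTIX_GEOMETRY_FLAG_NONE };

    OptixBuildInput buildInput = {};
    buildInput.type                              = OPTIX_BUILD_INPUT_TYPE_TRIANGLES;
    buildInput.triangleArray.vertexBuffers       = vertexBuffers;
    buildInput.triangleArray.numVertices         = numVertices;
    buildInput.triangleArray.vertexFormat        = OPTIX_VERTEX_FORMAT_FLOAT3;
    buildInput.triangleArray.vertexStrideInBytes = sizeof( float3 );
    buildInput.triangleArray.flags               = triangleFlags;
    buildInput.triangleArray.numSbtRecords       = 1;

    // ... optixAccelComputeMemoryUsage + optixAccelBuild as usual; rays then
    // intersect the interpolated geometry via the rayTime argument of optixTrace.
}
```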

If avoiding redundant work for rays shot against the static geometry is the central goal, would it be an option to use the visibilityMask to make the static geometry “invisible” to rays that should only process “moved triangles”? This strategy relies on the fact that launching threads/rays itself is cheap and that “missing” is fast, i.e., a ray that doesn’t trigger an any-hit or closest-hit program terminates very quickly.
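Roughly like this (a sketch; the mask bits, ray setup, and names are my own choices):

```cpp
#include <optix.h>

// Hypothetical mask bits: bit 0 = static geometry, bit 1 = moving geometry.
// Host side, when filling the OptixInstance array for the IAS:
//   staticInstance.visibilityMask = 1u;  // STATIC_GEOMETRY
//   movingInstance.visibilityMask = 2u;  // MOVING_GEOMETRY

struct Params
{
    OptixTraversableHandle handle;
};
extern "C" { __constant__ Params params; }

// Device side: a ray that should only process "moved triangles" traces with
// a mask that excludes the static geometry entirely, so it misses quickly
// whenever no moving instance is along its path.
extern "C" __global__ void __raygen__moved_only()
{
    // Ray setup is a placeholder.
    const float3 origin    = { 0.0f, 0.0f, 0.0f };
    const float3 direction = { 0.0f, 0.0f, 1.0f };

    unsigned int p0 = 0;  // payload slot, unused in this sketch
    optixTrace( params.handle,
                origin, direction,
                0.0f,                 // tmin
                1e16f,                // tmax
                0.0f,                 // rayTime
                2u,                   // visibilityMask: only MOVING_GEOMETRY instances
                OPTIX_RAY_FLAG_NONE,
                0, 1, 0,              // SBT offset, SBT stride, miss SBT index
                p0 );
}
```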