# Vertex-Based Motion Blur in OptiX 6.0.0 using GeometryTriangles

Hi,

So far I have successfully implemented motion blur (using motion keys through transforms) for rendering in my engine (a path tracer based on the “OptiX Advanced Introduction Samples”), but that only produces motion blur for an entire object.
I also want per-vertex motion blur, so that a bone animation also produces smooth motion blur based on the relative vertex movement.

The previous frame’s vertex buffer is already present on the GPU, and I can load the associated previous vertex (corresponding to the current one) from that prevVertex buffer to calculate the velocity between them. I can visualize the motion without problems.

But in the attribute program of GeometryTriangles it does not seem possible to apply the velocity directly, since there is no final vertex position there, and changing the barycentrics would be invalid. I could pass the velocity as an attribute to the closest-hit program, but that would only provide the information for shading; I need to update the vertex position itself.

Is there a way to apply the velocity to the intersection position in GeometryTriangles on the GPU somehow (since the buffers are already there)? The optixParticles sample uses intersection and bounding-box programs. That would be a solution, but then I could not use GeometryTriangles.

Is my assumption about the following correct (which I want to try)?
I found rtGeometryTrianglesSetMotionVertices, which can set vertices for multiple motion steps. So when both a current and a previous vertex buffer are present, could I mix them into one buffer this way:
motionStepCount := 2 (via rtGeometryTrianglesSetMotionSteps)

1. array [prev Vertex * vertex count] with vertexMotionStepByteStride
2. array [curr Vertex * vertex count] with vertexMotionStepByteStride

Would that do the job of creating vertex-based motion blur from the current and previous frame’s vertex buffers?

The documentation says: “[…] Triangles are linearly interpolated between motion steps. […]”
Does this mean that the value of rtCurrentTime also sets the “t” value for the vertex motion time position within the range set by rtGeometryTrianglesSetMotionRange?

Thank you very much.
Any help is appreciated.

My system: Win10 Pro 64-bit, VS2019 Community, CUDA 10.0, driver 431.36, OptiX 6.0.0, GTX 1050 2GB

Right, vertex-based motion blur (morphing) is possible directly with OptiX as long as the mesh topology doesn’t change, simply by providing the vertex positions for the different key frames as additional data inside the Geometry:
https://raytracing-docs.nvidia.com/optix_6_0/guide_6_0/index.html#motion_blur_math#motion-in-geometry-nodes
or GeometryTriangles
https://raytracing-docs.nvidia.com/optix_6_0/guide_6_0/index.html#host#3614

The custom Geometry primitives need a special bounding box program which gets called at least per primitive per key frame.
https://raytracing-docs.nvidia.com/optix_6_0/guide_6_0/index.html#motion_blur_math#bounding-boxes-for-motion-blur

For GeometryTriangles look for the entry points with “Motion” in the name.
API Reference Guide: https://raytracing-docs.nvidia.com/optix_6_0/api_6_0/html/group___geometry_triangles.html
In particular the rtGeometryTrianglesSetMotionVertices you already found:
https://raytracing-docs.nvidia.com/optix_6_0/api_6_0/html/group___geometry_triangles.html#gac9c60bfb06bb0ff39c5a84e17e4573f5

OptiX will then interpolate the positions according to the current time, and if your renderer already supports SRT motion via transforms, nothing else needs to change. It should simply work once the additional morphed mesh data is provided per key frame.

Thanks for the clarifications. Very much appreciated.

I arrived at a solution using setMotionVerticesMultiBuffer, which can even take the two vertex buffers directly, without mixing them into one.
Those buffers are already optix::Buffers on the GPU.
It was much easier than I thought, and I already got it working in a small test renderer app. Great!