Hello and happy new year,
I am building an application that fuses 360° media with 3D content.
The 3D content is a mesh animated with linear blend skinning. It is a single mesh (i.e. a single geometry, not divided into separate geometries per skeleton bone), so I cannot animate it through multiple transform nodes.
The mesh is a captured performance which is produced by a 3D reconstruction pipeline described in:
“Alexiadis, D.S., Zioulis, N., Zarpalas, D. and Daras, P., 2018. Fast deformable model-based human performance capture and FVV using consumer-grade RGB-D sensors. Pattern Recognition, 79, pp.260-278.”
In my understanding, splitting the mesh into submeshes according to the per-vertex skinning weights (without "breaking" triangles) would be rather difficult; I tried this without good results.
So I'd like to ask whether it is possible to implement the animation in device code, in the intersection program, i.e. transforming the vertices (and consequently the normals) when reading them from the vertex buffer.
Would such a scenario be possible, and what would the consequences be for the acceleration structure? Would I need to rebuild it on each animation frame?
A sample of the application I am working on can be seen here:
Thank you in advance.