I am working with animated meshes that share the same (or a very similar) bounding box and whose topology is preserved across frames (i.e., vertex count and half-edge structure stay fixed). An animation consists of more than eight and at most 30 key frames. The ray tracing results are processed and viewed in real time, so computation time is critical and re-uploading the geometry at every step is currently out of the question. I trace a low number of rays, fewer than 10000, but with a high ray depth, up to 60, due to the transparency of the objects in the scene.
For this setting, selectors were perfect in the past: almost no impact on the frame rate. Now that I am moving to OptiX 6 with RTX mode, where selectors are deprecated, I need to find a proper replacement.
https://devtalk.nvidia.com/default/topic/1055044/optix/update-to-optix-6-0-from-5-0-1-crashes-with-canonicalstate-still-used-in-function/post/5348464/#5348464 suggests using RTrayflags, which allow behaviour changes per ray:
https://raytracing-docs.nvidia.com/optix_6_0/api_6_0/html/optix__declarations_8h.html#ab847419fd18642c5edc35b668df6f67d. To my understanding this does not help for animations, since we need to switch off some of the geometries, not the rays.
I have more than 8 key frames, so the 8-bit visibility masks do not work without changing the context in between.
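For reference, this is how I understand the mask mechanism would look if there were at most 8 key frames (sketch only; names such as current_frame and key_frame_group are my own, and the host calls are in comments):

```cuda
#include <optix.h>
#include <optix_world.h>

// Host side (sketch): tag each key frame's group with one mask bit, e.g.
//   key_frame_group[i]->setVisibilityMask(1u << i);
// Device side: select the frame by passing the matching mask to rtTrace.
rtDeclareVariable(rtObject, top_object, , );
rtDeclareVariable(unsigned int, current_frame, , );  // updated from the host per rendered frame

// Inside the ray generation program (PerRayData prd assumed):
//   rtTrace(top_object, ray, prd, RTvisibilitymask(1u << current_frame), RT_RAY_FLAG_NONE);
```

With only 8 mask bits available, this caps out at 8 key frames, which is why it does not fit my case.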
https://devtalk.nvidia.com/default/topic/1056070/optix/optix-raise-error-when-finding-child-node-of-transform/post/5354109/#5354109 proposes using optixDynamicGeometry. This is the part about which I could find the least information online. I have the impression that it is difficult to use in my context, since the vertex positions can change almost randomly between key frames. Please let me know if you see potential there.
I looked into the motion blur sample and the documentation online, which were mentioned in https://devtalk.nvidia.com/default/topic/1043459/acceleration-structure-memory-consumption-/?offset=1#5293341. With linear interpolation, the number of key frames could potentially be reduced to four or eight. My first goal would be to switch between two key frames without any (motion) blur, i.e. first showing the first and then the second key frame, potentially with one unblurred interpolated frame in between. To my understanding, the switch should be possible either via the motion range (i.e. the time interval of the object movement) or via the current time that is propagated using
rtTrace(top_object, ray, TIME, prd);
in pinhole_camera.cu or in accum_camera_mblur.cu. However, I could only see the effect of the motion range on the blurriness of the image, not a clear change between two key frames. Could you help me take a step in the right direction?
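To make the question concrete: if I understand the sample correctly, the blur in accum_camera_mblur.cu comes from varying the time across rays/accumulation frames, so a sharp interpolated frame should result from passing one fixed time to every ray of a launch. A sketch of what I mean (motion_time is my own variable):

```cuda
#include <optix.h>
#include <optix_world.h>

rtDeclareVariable(rtObject, top_object, , );
rtDeclareVariable(float, motion_time, , );  // set from the host per frame, e.g. 0.0f, 0.5f, 1.0f

// Inside the ray generation program, instead of a per-ray jittered time
// (PerRayData prd assumed):
//   rtTrace(top_object, ray, motion_time, prd);
```

Stepping motion_time from the start to the end of the motion range should then play the interpolation back without blur, if my understanding of the time parameter is correct.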
Both posts above mention changing the tree. Along the same lines, I believe one could avoid selectors by defining a group animation_boundary that holds the key frames. The structure would be the following:
optix::Group animation_boundary = context->createGroup();
// set the acceleration of the animation boundary
// set the geometry of the animation boundary
top_object->addChild(animation_boundary);
...
animation_boundary->setChild(i, key_frame_i);
...
One could then forward the rtObject rt_object_i of every key_frame_i from the host to the device, where an additional variable decides which frame should be used.
I like this approach, and with my current knowledge it is the one I would implement, since no recomputation of the acceleration structures should be necessary. But I do not know how to properly forward the rtObjects; the only way I see is hardcoding, e.g., 30 key frame variables.
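The only non-hardcoded alternative I currently see is to keep the selection on the host and rebind a single rtObject variable before each launch (sketch; animation_root, key_frame_groups and current_frame are my own names):

```cpp
#include <optixu/optixpp_namespace.h>
#include <vector>

// Host side: one group with its own acceleration per key frame, built once
// during scene setup.
std::vector<optix::Group> key_frame_groups;  // filled when the scene is created

// Per rendered frame: rebind the variable the device traces against,
// then launch as usual.
//   context["animation_root"]->set(key_frame_groups[current_frame]);
//   context->launch(0, width, height);
```

But that moves the decision to the host side; it avoids forwarding all 30 rtObjects to the device, at the cost of a variable rebind per frame, and I am not sure whether rebinding the graph root each frame triggers any revalidation overhead in RTX mode.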
Is it possible to use getChild on an rtObject on the device side as well (similar to the group on the host side)?
Can I forward a vector of rtObjects? Or, to formulate it differently: how did you decide about the supported use cases in your response to https://devtalk.nvidia.com/default/topic/1044745/optix/how-to-pass-a-buffer-of-graph-nodes-to-optix-/post/5301685/#5301685?
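Roughly what I would like to be able to write on the device side, if buffers of graph nodes were supported (sketch; the names are my own, and I could not find out whether rtBuffer<rtObject> is legal at all):

```cuda
#include <optix.h>
#include <optix_world.h>

rtBuffer<rtObject> key_frames;               // one graph node per key frame
rtDeclareVariable(unsigned int, current_frame, , );  // set from the host per rendered frame

// Inside the ray generation program (PerRayData prd assumed):
//   rtTrace(key_frames[current_frame], ray, prd);
```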
I am open to other ideas for approaching the animation; please let me know if you need more information!