This sample uses the Cluster API available in OptiX 9.0 along with OpenSubdiv Lite. It runs on both Linux and Windows on most GPUs from Ampere onward. On recent hardware, we maintain a high frame rate even while generating millions of new micro-triangles every frame, for scenes like this:
The character here is animated, and we re-tessellate and rebuild the entire scene every frame using a camera-adaptive tessellation algorithm. In this screenshot we are hitting 70 fps with 32.5M unique micro-triangles on an RTX 4080.
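For anyone curious what "camera-adaptive" means in practice, here is a minimal sketch of one common heuristic: pick a per-edge tessellation rate from the edge's projected screen-space length, so distant geometry gets fewer segments. This is an illustration under assumed parameter names (`targetPx`, the 64-segment clamp, etc.), not the sample's actual implementation.

```cpp
#include <algorithm>
#include <cmath>

// Sketch of a camera-adaptive tessellation heuristic (illustrative only).
// An edge of world-space length edgeLen at view distance dist projects to
// roughly edgeLen / dist * (screenHeight / (2 * tan(fovY / 2))) pixels.
// Dividing that by a target micro-triangle edge length in pixels gives the
// number of segments to tessellate the edge into.
int edgeTessRate(float edgeLen, float dist, float fovY,
                 int screenHeight, float targetPx)
{
    // Pixels covered by one world-space unit at this distance.
    float pxPerWorldUnit = screenHeight / (2.0f * std::tan(fovY * 0.5f)) / dist;
    float projectedPx = edgeLen * pxPerWorldUnit;
    int rate = static_cast<int>(std::ceil(projectedPx / targetPx));
    // Clamp to a hypothetical per-edge hardware/cluster limit.
    return std::clamp(rate, 1, 64);
}
```

With a 90-degree vertical FOV at 1080p, a 1-unit edge 10 units away covers about 54 pixels; targeting 8-pixel micro-triangle edges yields 7 segments, while the same edge far in the distance clamps down to 1.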
Feel free to post comments and questions here on the forum and please report bugs as GitHub issues. Thanks!
Great repo! The linked tech blog article mentions that one can replace the user-space accumulation loop with DLSS Ray Reconstruction (DLSS-RR). It seems that DLSS is meant to be integrated via the Streamline SDK, which supports only DirectX/Vulkan (and only on Windows). Is there an undocumented way to use it (or at least DLSS-RR) in an OptiX program on Linux?
Thanks for the link! Last time I checked the NGX repository, Ray Reconstruction wasn’t available yet. Will try to integrate it with a pure OptiX pipeline but it would be amazing if NVIDIA open-sourced the OptiX implementation shown in this SIGGRAPH presentation. The NVIDIA-RTX/RTXMG repository only showcases DirectX/Vulkan integration.
Hey, sorry for the confusion here. I should have said that even though we are thinking about it, a direct CUDA API for DLSS-RR is NOT currently enabled in the public drivers and NGX DLLs. I also replied to your post on the other forum.
Thanks for the clarification. I played around with the NGX SDK and modified the NVIDIA-RTX/RTXDI sample to use DLSS-RR (Vulkan). My 2c regarding the OptiX integration: