OptiX versions of Ray Tracing "The Next Week" and "The Rest of Your Life"

After reading about Ingo Wald's port of Peter Shirley's first minibook (https://twitter.com/IngoWald/status/1063658627372728320), I decided over the last few weeks to learn some OptiX by porting the second and third minibooks on my own, and I figured it would be interesting to share them here as well. It's been super interesting and fun! I wrote two blog posts, one for each book, explaining my reasoning and including some code, pictures, links, and resources related to each chapter.

“The Next Week”: https://joaovbs96.github.io/optix/2018/12/24/next-week.html

“The Rest Of Your Life”: https://joaovbs96.github.io/optix/2019/01/12/rest-life.html


Hope this helps someone trying to learn the API. Feedback is appreciated!

I'm also reading Shirley's book right now. His idea of how to design a PDF for importance sampling inspires me a lot. Ray tracing based on the OptiX API runs much faster than on the CPU. Nice work!

João, this is fantastic, thank you for posting your OptiX adventures!

I agree with dhart, this is great stuff. I read through the first blog but haven’t downloaded and built the code. Your use of callable programs to build shader networks seems spot on.

On the motion blur section: if you want to turn your motion spheres with padded bounding boxes into motion spheres that use the OptiX API to create a motion BVH, that would just require minimal changes on top of what you have. Padded boxes are fine for small to medium motion as in your image, but get pretty slow for extreme motion. Let us know if you try this and have feedback on the API. The optixMotionBlur SDK sample could be a starting point.
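For context, the padded-box approach from the book computes one AABB that covers the sphere at both ends of its motion, which is why the box (and traversal cost) grows with the motion distance. A minimal host-side sketch, with illustrative names rather than anything from the posted code:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };
struct Aabb { Vec3 lo, hi; };

// Bounding box for a sphere whose center moves linearly from c0 (t=0)
// to c1 (t=1). Padding a single box to cover the whole motion path is
// simple, but the box grows with the motion distance, so traversal gets
// slow for extreme motion -- the case where a real motion BVH pays off.
Aabb motionSphereBounds(const Vec3& c0, const Vec3& c1, float radius) {
    Aabb box;
    box.lo = { std::min(c0.x, c1.x) - radius,
               std::min(c0.y, c1.y) - radius,
               std::min(c0.z, c1.z) - radius };
    box.hi = { std::max(c0.x, c1.x) + radius,
               std::max(c0.y, c1.y) + radius,
               std::max(c0.z, c1.z) + radius };
    return box;
}
```

With a motion BVH, the acceleration structure instead stores per-keyframe bounds and interpolates them at the ray's time, so each node stays tight.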

On volume rendering, you’re picking a single random point inside the volume in the intersection program, correct? Could this approach be extended to handle nested volumes, or more importantly non-volume surfaces floating inside the volume? You might need to build up some intervals and delay the sampling of the volume until later to handle these more complex cases, rather than returning the sample point directly from the intersection program where you only have local knowledge of the object.
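For reference, the single-sample approach from "The Next Week" draws a scattering distance inside a constant-density medium from an exponential distribution. A small host-side sketch of that sampling step (names are mine, not from the posted code):

```cpp
#include <cmath>

// Constant-density medium, as in "The Next Week": given the ray's entry
// and exit distances through the volume, sample a scattering distance
// with density*exp(-density*d) falloff. Returns a negative value when
// the sampled distance overshoots the volume (ray passes through unscattered).
float sampleScatterDistance(float tEnter, float tExit, float density,
                            float u /* uniform random in (0,1) */) {
    const float pathLength  = tExit - tEnter;          // distance inside the volume
    const float hitDistance = -std::log(u) / density;  // exponential free flight
    if (hitDistance > pathLength) return -1.0f;        // no scattering event
    return tEnter + hitDistance;                       // scatter point along the ray
}
```

The nesting problem above comes from needing tEnter/tExit for the correct interval: with surfaces or other volumes inside the medium, those bounds aren't known from a single local intersection, which is why deferring the sampling until the intervals are assembled helps.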

Thanks a lot for the comments, I really appreciate it!

dlacewell, now that I'm finished with the books, I'm going back to include, redo, and revamp some things: adding proper miss shaders (previously I was only assigning a PRD variable indicating that the ray missed), environment mapping as described in the OptiX Quick Start tutorial, and triangle meshes (I used some ideas from the sutil OptixMesh.h/triangle_mesh.cu files).
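For the environment-map miss shader, the usual step is converting the ray direction into spherical (u, v) coordinates before sampling the texture. A sketch of that mapping, assuming a latitude-longitude image (identifiers are illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a normalized ray direction to lat-long texture coordinates in [0,1].
// u follows the azimuth around the y axis, v follows the polar angle.
void dirToLatLongUV(const Vec3& d, float& u, float& v) {
    const float pi = 3.14159265358979323846f;
    u = 0.5f + std::atan2(d.z, d.x) / (2.0f * pi);
    v = std::acos(d.y) / pi;  // d must be normalized so d.y is in [-1, 1]
}
```

In a miss program this (u, v) would feed a texture fetch instead of the constant gradient used as the sky in the books.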

Changing the moving spheres to make use of the API's motion BVH will most likely be the next step. I'll let you know once I've done it!

About the volume approach, I'd need to think about it a bit more. Currently it's close to what Shirley does in the second book. I'm still unsure how I would delay the volume sampling, but your idea does sound interesting!