I’m looking to implement the “representative point method” described here for specular lighting of area lights:
I’m new to OptiX and just wanted a sanity check that my approach is a reasonable way to do it:
I was going to store the area lights in a separate geometry group, say “lights_top”. Then when my camera rays hit something, I would recurse with another rtTrace against the lights_top group using the reflected ray, and write the hit/miss programs for that reflected ray to compute the representative point.
Or would it be better to loop over the area lights instead of recursing?
Why would you want to apply rasterization tricks in a ray tracer which can handle that more directly?
If you’re new to OptiX, I would recommend watching and reading my GTC 2018 presentation “An Introduction to NVIDIA OptiX” and working through the accompanying open-source examples.
[url]https://devtalk.nvidia.com/default/topic/998546/optix/optix-advanced-samples-on-github/[/url]
That also shows how to implement sampling and evaluation of area lights, in that case a parallelogram, inside a progressive uni-directional global illumination path tracer. But the explicit light sampling itself is applicable to any algorithm. Specular reflections do not sample the light (no direct lighting); they either implicitly hit a light with the continuation ray or they don’t.
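To illustrate the explicit light sampling mentioned above, here is a minimal sketch of uniformly sampling a point on a parallelogram area light. This is not the code from the examples; the function name sampleParallelogram, the float3 helpers, and the parameter names are my own, and in real OptiX device code you would use the built-in vector types instead.

```cpp
#include <cmath>

// Minimal vector type so the sketch compiles outside OptiX device code.
struct float3 { float x, y, z; };

static float3 cross3(const float3& a, const float3& b) {
  return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float length3(const float3& v) {
  return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// Explicit light sampling of a parallelogram area light:
// pick a point uniformly on the surface and return the pdf (1 / area).
// 'anchor' is one corner, 'edgeU' and 'edgeV' span the parallelogram,
// u1 and u2 are uniform random numbers in [0, 1).
float3 sampleParallelogram(const float3& anchor,
                           const float3& edgeU, const float3& edgeV,
                           float u1, float u2, float* pdf) {
  *pdf = 1.0f / length3(cross3(edgeU, edgeV)); // uniform area pdf
  return { anchor.x + u1 * edgeU.x + u2 * edgeV.x,
           anchor.y + u1 * edgeU.y + u2 * edgeV.y,
           anchor.z + u1 * edgeU.z + u2 * edgeV.z };
}
```

The sampled point and pdf then feed the direct-lighting (next event estimation) term; a shadow ray toward the sampled point decides visibility.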
Thanks for the reply. Regarding “Specular reflections do not sample the light (no direct lighting), they either implicitly hit a light with the continuation ray or they don’t.”
How do you handle surface roughness? I looked at the code and it seems to just set the light emission when a ray hits light geometry:
thePrd.radiance = light.emission;
Or do you shoot out multiple rays in the direction of the specular lobe and average the results? That seems expensive but maybe I’ll try it.
The OptiX Introduction examples implement a progressive uni-directional iterative path tracer which handles global illumination. Non-specular surfaces need many rays to integrate the illumination over the hemisphere above the hit point.
That is the ground truth implementation, not a real-time trick and not done in a single launch.
I did not include a glossy microfacet material in that example for brevity, but the principle is the same as in the diffuse reflection (Lambertian) BSDF.
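For reference, the sampling half of that Lambert BSDF boils down to cosine-weighted hemisphere sampling. A hedged sketch in local shading space (z is the surface normal); the struct and function names are mine, not the example’s:

```cpp
#include <cmath>

// Sampled direction in local space plus its pdf.
struct Dir { float x, y, z, pdf; };

// Cosine-weighted hemisphere sampling around the local z-axis.
// u1, u2 are uniform random numbers in [0, 1).
// pdf = cos(theta) / pi; since the Lambert eval is albedo / pi, the
// Monte Carlo weight (eval * cos / pdf) collapses to just the albedo.
Dir sampleCosineHemisphere(float u1, float u2) {
  const float pi  = 3.14159265358979f;
  const float r   = std::sqrt(u1);        // radius on the unit disk
  const float phi = 2.0f * pi * u2;
  Dir d;
  d.x   = r * std::cos(phi);
  d.y   = r * std::sin(phi);
  d.z   = std::sqrt(1.0f - u1);           // cos(theta), always >= 0
  d.pdf = d.z / pi;
  return d;
}
```

A glossy microfacet BSDF plugs into the same structure: only the direction sampling, the eval, and the pdf change.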
If your goal is to emulate that behavior inside a real-time approach, then you’re back to rasterization tricks, which can only approximate the exact solution.
At least you would have a comparison against the ground truth when implementing the same BSDF inside the path tracer. That just needs the sampling and evaluation functions added.