I would like to implement multi-pass rendering for a path tracer built on the optixPathTracer example. How would I structure a project to achieve this?
diffuse RGB (1 trace, closest hit)
radiance (multiple traces, closest hit)
alpha (1 trace, miss)
Would I need different ray_gen programs (and therefore different pipelines?), or would it work to render the results from the different traces into different buffers (e.g. the miss program would write into the alpha and diffuse channels)? In the second case, I guess I would need a more elaborate payload for the rays to carry the information for multiple channels over multiple bounces. Is there an example for this, or can someone give me a good starting point on how to accomplish it?
Thanks in advance!
The path tracer implements a progressive Monte Carlo algorithm. If you mean to accumulate these three outputs in the same progressive way, there should be no problem handling them in a single pipeline with the same ray generation program, which would simply accumulate and write the generated data into individual output buffers for the diffuse, radiance, and alpha values.
You’re right that this would require storing the respective data in the per-ray payload inside your closest-hit and miss programs, and care would need to be taken to only store the results for the interactions you’re interested in. So if the diffuse and alpha values should only be set for the primary ray, you could write them to the output buffers inside the ray generation program only in that case.
If you do not want to accumulate the diffuse and alpha values but only shoot a single ray for those, then that could still be handled by the same pipeline. You could, for example, set a flag inside your launch parameters which indicates what value you want to produce and special-case that inside the ray generation program.
E.g. instead of jittering the fragment sample position over the pixel area, you simply generate one ray through the center of each pixel (offset (0.5, 0.5)). If the alpha should hold a binary hit-or-miss decision, that could be handled in the same launch, and the accumulated radiance in many following launches.
You could even reuse the same output buffer, but you would need to save its contents before starting the different launches.
The diffuse and alpha values would be aliased when not accumulating them: the progressive path tracer is not only shooting different paths through the scene to approximate the rendering equation for the global illumination, it is also anti-aliasing the image by jittering the primary ray's sample position over the pixel area. So if the alpha channel should express the coverage of fine detail or slanted geometry edges, accumulating it would produce more accurate results.
That’s usually called Arbitrary Output Variables (AOV) and is supported by many renderers.
This is also used for Light Path Expressions (LPE), where different surface interactions are stored into different channels, e.g. diffuse and specular events separately.
An example that writes the radiance plus an optional albedo and an optional normal vector for the primary hit into individual output buffers can be found in my intro_denoiser example, discussed with links in this thread:
Wow, thanks a lot for this detailed answer and the many ideas! I will implement multiple buffers. Thanks also for the hints on aliasing; I will think about what this means for my use case. I’m using OptiX to compute fast approximate simulations of light and sound. The buffers are input for a reinforcement-learning system that modifies aspects of the scene, which is then re-rendered. Therefore, as opposed to the classical rendering use case, my renders represent different data channels rather than an image to be viewed by the human eye. The anti-aliasing would introduce some “softness” into the data. Thanks also for mentioning AOV and LPE. Always something new to learn.