I’m writing a path tracer that primarily focuses on exporting a final image rather than updating a viewer, which means I do a single launch rather than an iterative loop as is common in the OptiX samples. At the moment I loop over the per-pixel sample count in the raygen program and write the average of all samples to the output buffer. This feels rather inefficient utilization-wise, given that for some pixels every sample will only invoke the miss program, while others will have multiple bounces and light samples to trace for every pixel sample.
Therefore I’m thinking I could use the z dimension of the launch for the sample index rather than looping in raygen. But I’m unsure how to then efficiently accumulate the correct color in my output buffer.
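For reference, this is roughly what I mean in the raygen program (a minimal sketch; the variable names are mine, only optixGetLaunchIndex and optixGetLaunchDimensions are actual OptiX device functions):

const uint3 idx = optixGetLaunchIndex();
const uint3 dim = optixGetLaunchDimensions();
const unsigned int pixel  = idx.y * dim.x + idx.x; // linear pixel index
const unsigned int sample = idx.z;                 // one launch thread per (pixel, sample)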
I know that when iterating through samples you can just lerp the color that’s already in the buffer with the color coming from the current sample, like @droettger does in the OptiX apps:
if (0 < sysParameter.iterationIndex)
{
    const float4 dst = sysParameter.outputBuffer[index]; // RGBA32F
    radiance = lerp(make_float3(dst), radiance, 1.0f / float(sysParameter.iterationIndex + 1));
}
That lerp keeps the buffer holding the running average of all samples so far, which works because each iteration knows how many samples came before it. If I’m firing all samples at once, I would know my own sample index but not how many samples have already been written to the buffer by the time mine lands. So my question is really whether there is a standard way to solve this problem.
The options I’m considering so far are: either keep a separate buffer holding a per-pixel counter of how many samples have been written to the output, and use atomic operations when lerping and incrementing the counter; or add all samples atomically and then run a second pass after rendering that divides the output buffer by the sample count.
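For the second option, here is a minimal sketch of what I have in mind, assuming a float4 accumulation buffer that is zeroed before the launch. Params, accumBuffer, and tracePixelSample are placeholder names of my own, not from the SDK:

#include <optix.h>
#include <cuda_runtime.h>

struct Params
{
    float4*      accumBuffer; // zero-initialized before the launch
    unsigned int width;
    unsigned int height;
};

extern "C" __constant__ Params params;

// Stand-in for the real path tracing work for one (pixel, sample) pair.
static __device__ float3 tracePixelSample(unsigned int x, unsigned int y, unsigned int sample)
{
    return make_float3(0.0f, 0.0f, 0.0f);
}

extern "C" __global__ void __raygen__accumulate()
{
    const uint3 idx = optixGetLaunchIndex();
    const unsigned int pixel = idx.y * params.width + idx.x;

    const float3 radiance = tracePixelSample(idx.x, idx.y, idx.z);

    // atomicAdd on individual float components is order-independent,
    // so no counter is needed; every sample just adds its contribution.
    atomicAdd(&params.accumBuffer[pixel].x, radiance.x);
    atomicAdd(&params.accumBuffer[pixel].y, radiance.y);
    atomicAdd(&params.accumBuffer[pixel].z, radiance.z);
}

// Second pass (a plain CUDA kernel launched after optixLaunch has finished):
// divide the sums by the sample count to get the average.
extern "C" __global__ void resolve(float4* accumBuffer, unsigned int numPixels, float spp)
{
    const unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels)
    {
        const float inv = 1.0f / spp;
        accumBuffer[i].x *= inv;
        accumBuffer[i].y *= inv;
        accumBuffer[i].z *= inv;
        accumBuffer[i].w  = 1.0f;
    }
}

My current feeling is that the counter-based lerp is harder to get right, since the lerp is a read-modify-write on the buffer that isn’t atomic as a whole, whereas the add-then-normalize variant only needs plain atomicAdd (and the resolve pass could be folded into whatever tonemap/copy-to-image step runs before export anyway). One caveat I’m aware of: atomic float adds have no guaranteed ordering, so the sums won’t be bit-identical between runs.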