I’m interested in using the OptiX progressive API to generate HDR images. However, a few built-in aspects of the progressive API seem intended for low-dynamic-range display, and I’m wondering whether there’s a way to work around them.
First question: Does the compositing happen before or after the data is tone mapped? If the compositing is performed on the original HDR data, is there any way I can change how it’s done? For instance, store a sum instead of an average?
Second, can I change the tonemap operator? The documentation says the operator is:
final_value = clamp( pow( hdr_value, 1/gamma ), 0, 1 )
But the clamp removes a lot of data, since my HDR values can be much greater than 1. To preserve that information, I’d like to use a false-color mapping or some other tonemap. If the tonemap is applied after compositing, I’d even like the option to switch tonemaps without restarting the progressive launch (though I suppose I could save and reuse the original output buffer for that, if I could change the compositing function).
Alternatively, does anyone know of a strategy for encoding data directly into the stream buffer so that I can interpret it later? For instance, if I want to transfer a set of floating-point values back to the CPU, could I write them into the stream buffer as raw bit patterns? I know that defeats the compression, but I’m willing to give up a little speed for this.
And third question: Is there a limit to the number of stream buffers I can create from one launch? Could a ray generation program write to multiple output buffers, each with its own associated stream buffer?