I’m using the OptiX denoiser in our OpenGL render application as the last step after ray tracing. The data format we have is 8-bit unsigned char. As stated in the OptiX 7.3 release notes, this format isn’t directly supported by the denoiser, so I’m manually converting the image to float by dividing by 255, so the input data range is 0.0 to 1.0.
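For reference, here is a minimal sketch of the conversion step I’m doing before invoking the denoiser (the buffer and function names are just placeholders for our actual code):

#include <cstddef>

// Convert an 8-bit image buffer to float in [0.0, 1.0] for the denoiser input.
void convertToFloat(const unsigned char* srcU8, float* dstF32, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        dstF32[i] = static_cast<float>(srcU8[i]) / 255.0f;  // maps 0..255 to 0.0..1.0
}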
However, on retrieving the output buffer, I notice the data range has been enlarged to beyond 1.0. For some test images, the range seems to be 0.0 to 2.0. I need to convert this data back to unsigned char. If there were a way to find the min and max of the output buffer, I could simply cast each pixel like this: (unsigned char)(255 * (image_pixels[i] - min) / (max - min)).
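In other words, something like the following per-frame normalization, assuming I scan the whole output buffer on the host first (again, the names are placeholders):

#include <algorithm>
#include <cstddef>

// Find min/max of the denoised float buffer, then remap it back to 0..255.
void convertToUchar(const float* srcF32, unsigned char* dstU8, size_t count)
{
    float minVal = srcF32[0], maxVal = srcF32[0];
    for (size_t i = 1; i < count; ++i) {
        minVal = std::min(minVal, srcF32[i]);
        maxVal = std::max(maxVal, srcF32[i]);
    }
    const float range = maxVal - minVal;
    for (size_t i = 0; i < count; ++i)
        dstU8[i] = static_cast<unsigned char>(255.0f * (srcF32[i] - minVal) / range);
}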
Could you tell me how to find out the output buffer’s expected data range? Or is there another way to do this conversion from float back to unsigned char on the output buffer?
Thanks very much!