OptiX denoiser output buffer data range

I’m using the OptiX denoiser in our OpenGL render application as the last step after ray tracing. The data format we have is 8-bit unsigned char. As stated in the OptiX 7.3 release notes, this format isn’t directly supported by the denoiser, so I’m manually converting the image to float by dividing by 255. So the input data range is 0.0 to 1.0.
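The conversion I’m doing looks roughly like this (a simplified sketch of our kernel, assuming a packed 4-channel RGBA8 input; the names are placeholders):

```cpp
#include <cuda_runtime.h>

// Sketch: convert 8-bit RGBA to float in [0, 1] before handing it to the denoiser.
// Assumes a tightly packed uchar4 input buffer and a float4 output buffer.
__global__ void uchar4ToFloat4(const uchar4* in, float4* out, unsigned int numPixels)
{
    const unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels)
        return;

    const uchar4 p = in[i];
    out[i] = make_float4(p.x / 255.0f, p.y / 255.0f, p.z / 255.0f, p.w / 255.0f);
}
```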

However, on retrieving the output buffer, I realized the data range has been enlarged to beyond 1.0. For some test images, the range seems to be 0.0 to 2.0. I need to convert this data back to unsigned char. If there were a way to find out the min and max of the output buffer, I could simply convert the output buffer like this: (unsigned char)(255 * (image_pixels[i] - min) / (max - min)).
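In other words, once min and max are known, the host-side conversion back would look roughly like this (a sketch only; image_pixels, output_uchar, and numValues are placeholder names for our buffers):

```cpp
#include <cstddef>

// Sketch: rescale the denoised float output back to 8-bit, given its min and max.
// image_pixels is the float output copied back to the host, output_uchar the target buffer.
void floatToUchar(const float* image_pixels, unsigned char* output_uchar,
                  std::size_t numValues, float min, float max)
{
    for (std::size_t i = 0; i < numValues; ++i)
    {
        const float normalized = (image_pixels[i] - min) / (max - min);
        output_uchar[i] = (unsigned char)(255.0f * normalized);
    }
}
```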

Can you tell me how to find out the output buffer’s expected data range? Or is there another way to do this conversion from float to unsigned char on the output buffer?

Thanks very much!

Can you tell me how to find out the output buffer’s expected data range?

The OptiX denoiser has no functionality to calculate the minimum or maximum value of a color buffer.
It only has entry point functions that calculate the HDR intensity and the average color, which are needed to produce better results, especially for very dark or very bright inputs.
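For reference, a rough sketch of how those two entry points are used (error checks omitted; the denoiser handle, stream, input image, and scratch buffer are assumed to be the same ones you pass to optixDenoiserInvoke()):

```cpp
#include <optix.h>
#include <cuda_runtime.h>

// Sketch: calling the denoiser helper entry points on the noisy beauty input.
// d_intensity must point to 1 float and d_averageColor to 3 floats in device memory.
void computeDenoiserHelpers(OptixDenoiser denoiser,
                            CUstream      stream,
                            const OptixImage2D& noisyInput,
                            CUdeviceptr   d_scratch,
                            size_t        scratchSizeInBytes,
                            CUdeviceptr   d_intensity,
                            CUdeviceptr   d_averageColor)
{
    // HDR intensity of the noisy input (a single float).
    optixDenoiserComputeIntensity(denoiser, stream, &noisyInput,
                                  d_intensity, d_scratch, scratchSizeInBytes);

    // Average color of the noisy input (three floats).
    optixDenoiserComputeAverageColor(denoiser, stream, &noisyInput,
                                     d_averageColor, d_scratch, scratchSizeInBytes);

    // These device pointers are then set in OptixDenoiserParams::hdrIntensity and
    // OptixDenoiserParams::hdrAverageColor before calling optixDenoiserInvoke().
}
```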

Which denoiser mode did you use? (LDR, HDR, AOV)
https://raytracing-docs.nvidia.com/optix7/guide/index.html#ai_denoiser#nvidia-ai-denoiser
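The model kind matters for the expected data range: the LDR model expects inputs in [0, 1], while the HDR and AOV models expect linear HDR inputs. With the OptiX 7.3 API it is chosen when the denoiser is created; a minimal sketch, assuming the 7.3-style optixDenoiserCreate() and OptixDenoiserOptions (check the optix_host.h of your SDK for the exact fields):

```cpp
#include <optix.h>

// Sketch: selecting the denoiser model kind at creation time (error checks omitted).
// 'context' is an already created OptixDeviceContext.
OptixDenoiser createDenoiser(OptixDeviceContext context)
{
    OptixDenoiserOptions options = {};
    options.guideAlbedo = 0; // set to 1 if an albedo guide layer is provided
    options.guideNormal = 0; // set to 1 if a normal guide layer is provided

    OptixDenoiser denoiser = nullptr;
    // OPTIX_DENOISER_MODEL_KIND_LDR expects inputs in [0, 1];
    // OPTIX_DENOISER_MODEL_KIND_HDR / _AOV expect linear HDR inputs.
    optixDenoiserCreate(context, OPTIX_DENOISER_MODEL_KIND_HDR, &options, &denoiser);
    return denoiser;
}
```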

As stated in the OptiX 7.3 release notes, this format isn’t directly supported by the denoiser, so I’m manually converting the image to float by dividing by 255

Right. You mean 32-bit float? It’s recommended to use 16-bit half instead for better performance.
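A sketch of what that conversion could look like, again assuming a packed RGBA8 input (illustrative kernel, not SDK code):

```cpp
#include <cuda_fp16.h>

// Sketch: convert 8-bit RGBA directly to 16-bit half instead of 32-bit float.
// The resulting buffer would be described to the denoiser with
// OPTIX_PIXEL_FORMAT_HALF4 in OptixImage2D::format.
struct Half4 { __half x, y, z, w; }; // plain struct; CUDA has no built-in half4

__global__ void uchar4ToHalf4(const uchar4* in, Half4* out, unsigned int numPixels)
{
    const unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numPixels)
        return;

    const uchar4 p = in[i];
    out[i].x = __float2half(p.x / 255.0f);
    out[i].y = __float2half(p.y / 255.0f);
    out[i].z = __float2half(p.z / 255.0f);
    out[i].w = __float2half(p.w / 255.0f);
}
```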

Please always provide the following system configuration information when asking about OptiX issues:
OS version, installed GPU(s), VRAM amount, display driver version, OptiX (major.minor.micro) version, CUDA toolkit version (major.minor) used to generate the input PTX, host compiler version.