OptiX denoiser with non-OptiX rendered output

I am attempting to incorporate the OptiX denoiser into a non-OptiX based renderer.

I’ve modified the optix_10 introduction example by removing all of the raygeneration.cu kernel code, reading in my image and mapping it into the m_bufferOutput variable, which acts as input to the denoiser, and then continuing with the same denoising code as in the example.

I’ve used the utility functions that come with the examples to output the buffers to PNG. I’ve written out both the non-denoised buffers and the denoised one, and the images are identical. Given that the denoised buffer is being filled with the image, I would think that I’m skipping some step and the denoising is not actually being applied (although, given that the buffer has been filled, this seems odd), so I’ve formulated these questions:

Although I’m simply assigning the image I need to the denoiser input buffer, is there anything else I would need to do for the denoiser to work on non-OptiX generated data?
Given that the image I’m providing is not an HDR image, would I need to set the denoiser hdr flag to 0, or would it not be an issue for it to be enabled but given an LDR image?

In the examples, you blend the denoised image with the original image. Is there a benefit to this?

Thank you very much!

Here’s a very minimal example from another user doing the same in this post:
[url]https://devtalk.nvidia.com/default/topic/1036145/optix/optix-denoiser-exceptions-at-certain-buffer-sizes/[/url]

Use OptiX 5.1.0, remove the appendLaunch() call from the command list there, get your noisy buffer into the input buffer, execute the command list, and store what’s in the output buffer.
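
For reference, a minimal sketch of that setup with the OptiX 5.1 C++ host API (the helper name is hypothetical; creating the buffers and uploading your noisy image are assumed to happen elsewhere, with both buffers in RT_FORMAT_FLOAT4):

[code]
#include <optixu/optixpp_namespace.h>

// Hypothetical helper: run the built-in "DLDenoiser" stage on a noisy image
// that a non-OptiX renderer has already copied into inputBuffer via map()/unmap().
void denoise(optix::Context context,
             optix::Buffer  inputBuffer,   // RT_FORMAT_FLOAT4, width x height
             optix::Buffer  outputBuffer,  // RT_FORMAT_FLOAT4, width x height
             RTsize         width,
             RTsize         height)
{
    optix::PostprocessingStage denoiserStage = context->createBuiltinPostProcessingStage("DLDenoiser");
    denoiserStage->declareVariable("input_buffer")->set(inputBuffer);
    denoiserStage->declareVariable("output_buffer")->set(outputBuffer);

    // No appendLaunch() here -- the noisy image comes from your own renderer.
    optix::CommandList commandList = context->createCommandList();
    commandList->appendPostprocessingStage(denoiserStage, width, height);
    commandList->finalize();

    commandList->execute(); // the denoised image is now in outputBuffer
}
[/code]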

Either not declaring the hdr variable on the DenoiserStage at all or declaring it as unsigned int and setting it to 0 would use the LDR training data for the denoiser.
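
For example (a sketch, using the denoiserStage from above):

[code]
// Selects the LDR training data; not declaring the variable at all has the same effect.
denoiserStage->declareVariable("hdr")->setUint(0);
[/code]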

If you’re using the LDR denoiser, its built-in training data expects a gamma-corrected image and values must be in the range [0, 10]; otherwise you will get color corruption around brighter areas.
When using the HDR denoiser, you wouldn’t need to do the tone mapping and gamma correction with linear HDR input images.
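
If your renderer produces linear values and you want to feed the LDR denoiser, the preparation could look like this minimal CPU-side sketch (the helper name, the gamma of 2.2, and the float4 RGBA pixel layout are assumptions):

[code]
#include <algorithm>
#include <cmath>
#include <cstddef>

// Gamma-correct linear RGB and clamp into the [0, 10] range the LDR
// training data expects; alpha is left untouched.
void prepareForLdrDenoiser(float* rgba, std::size_t pixelCount, float gamma = 2.2f)
{
    const float invGamma = 1.0f / gamma;
    for (std::size_t i = 0; i < pixelCount; ++i)
    {
        for (int c = 0; c < 3; ++c)
        {
            const float v = std::max(rgba[i * 4 + c], 0.0f);
            rgba[i * 4 + c] = std::min(std::pow(v, invGamma), 10.0f);
        }
    }
}
[/code]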

If your input and output buffers are the same, make sure to set the “blend” variable to 0.0f.
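
As a sketch, with the stage from above:

[code]
// blend mixes the original back in: result = denoised * (1.0f - blend) + original * blend.
// With identical input and output buffers, a non-zero blend would presumably mix against
// pixels the stage is overwriting, so force a fully denoised result:
denoiserStage->declareVariable("blend")->setFloat(0.0f);
[/code]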

In my examples, the default blend is set to show the fully denoised image. I put in the “Blend” GUI variable to be able to show the original and denoised images. The “Frames” variable allows you to limit the number of samples in the noisy image to see how the denoiser works with differently refined images.
The denoiser is not actually filtering fireflies, which will result in rather big sparkles in the early frames.

Other than that, the denoiser has been trained on images generated with Iray, and its effectiveness can be influenced by the type of random noise. Another case where I wouldn’t expect optimal results is spectral noise.

Here is one more useful link on that topic. Read the OptiX Programming Guide link referenced in there.
[url]https://devtalk.nvidia.com/default/topic/1036160/?comment=5263638[/url]