OS is Windows 10 64-bit. Driver is 451.67. OptiX is 7.1.0. CUDA is 10.1 and 11.0. GPU is a GeForce RTX 2060.
I suspect the denoised result depends on the spatial resolution of the image. The input I sent you was a screenshot, so some binning of the noise had already taken place. When I process a saved full-resolution image, with both my application and the OptiX sample, I get a noisy output.
Sorry about that, my fault; I apologize.
This is why I initially asked whether there is some kind of setting for the 'kernel size'.
Here’s an image at the correct resolution (the noisy one):
And here’s the denoised one:
In the distance the noise is spread over larger pixel areas, which probably breaks the denoiser. When I downsample (via the screenshot), the noise is compressed from, say, 4x4 pixels down to 2x2. Is there a way to downsample an image by, say, a factor of 2.5, denoise it, and upsample it again?
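For what it's worth, the downsample/denoise/upsample pipeline itself is easy to sketch independently of OptiX. Below is a minimal NumPy example with a hand-rolled bilinear resampler; the `denoise_stub` is only a placeholder (a 3x3 box blur) standing in for the real denoiser invocation, and the 2.5x factor and image sizes are just illustrative assumptions:

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Bilinear resample a (H, W) or (H, W, C) float image to (out_h, out_w)."""
    h, w = img.shape[:2]
    # Pixel-center sample positions in source coordinates.
    ys = (np.arange(out_h) + 0.5) * h / out_h - 0.5
    xs = (np.arange(out_w) + 0.5) * w / out_w - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
    if img.ndim == 3:  # broadcast weights over the channel axis
        wy = wy[..., None]
        wx = wx[..., None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def denoise_stub(img):
    """Placeholder for the actual denoiser call; here a 3x3 edge-padded box blur."""
    pad = np.pad(img, ((1, 1), (1, 1)) + ((0, 0),) * (img.ndim - 2), mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

# Hypothetical 100x120 RGB image; uniform noise stands in for render noise.
noisy = np.random.default_rng(0).random((100, 120, 3)).astype(np.float32)
small = resize_bilinear(noisy, 40, 48)      # downsample by 2.5 (noise grain shrinks)
den = denoise_stub(small)                   # denoise at the reduced resolution
result = resize_bilinear(den, 100, 120)     # upsample back to the original size
```

Note the obvious trade-off: downsampling before denoising throws away high-frequency detail, so the upsampled result will be softer than denoising at full resolution.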
My use case is not something the AI model was trained on.