I’ve got the denoiser working here on my machine. When working with 8-bit b/w images, the resulting denoised image is really impressive. However, when I use a 24-bit RGB image, the denoised result is almost indistinguishable from the input. Is there a way to play with the internal knobs of OptiX to adjust the ‘kernel size’? Or is there any other parameter I can use to improve the denoising quality of RGB images to get close to the b/w results?
You mean you have 8-bit greyscale images vs. 24-bit RGB color images?
There is no difference in the denoiser invocation. Either format needs to be remapped to half- or float-typed RGB(A) buffers anyway to run the denoiser.
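For reference, here is a minimal sketch of that remapping, assuming the 8-bit image is already on the host (srcPixels, width, and height are hypothetical names; error checking omitted):

```cpp
#include <cuda.h>
#include <cuda_runtime.h>
#include <optix.h>
#include <vector>

// Expand 24-bit RGB (3 bytes per pixel) to float4 in [0, 1] and describe the
// device copy with an OptixImage2D, which is what the denoiser consumes.
OptixImage2D makeFloat4Input(const unsigned char* srcPixels,
                             unsigned int width, unsigned int height)
{
    std::vector<float4> host(width * height);
    for (unsigned int i = 0; i < width * height; ++i)
    {
        host[i] = make_float4(srcPixels[i * 3 + 0] / 255.0f,
                              srcPixels[i * 3 + 1] / 255.0f,
                              srcPixels[i * 3 + 2] / 255.0f,
                              1.0f);
    }

    CUdeviceptr d_pixels = 0;
    cudaMalloc(reinterpret_cast<void**>(&d_pixels), host.size() * sizeof(float4));
    cudaMemcpy(reinterpret_cast<void*>(d_pixels), host.data(),
               host.size() * sizeof(float4), cudaMemcpyHostToDevice);

    OptixImage2D image = {};
    image.data               = d_pixels;
    image.width              = width;
    image.height             = height;
    image.rowStrideInBytes   = width * sizeof(float4);
    image.pixelStrideInBytes = sizeof(float4);
    image.format             = OPTIX_PIXEL_FORMAT_FLOAT4;
    return image;
}
```

For an 8-bit greyscale input you would do the same, just replicating the single channel into R, G, and B.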
“However, when I use a 24-bit RGB image, the denoised result is almost indistinguishable from the input.”
You mean no difference between the noisy and the denoised image?
Do you set the OptixDenoiserParams::blendFactor to 0.0 to show the fully denoised image?
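In case it helps, a minimal sketch of those parameters (OptiX 7.1 field names; the denoiser state, scratch buffers, and layer setup are assumed to exist elsewhere):

```cpp
// blendFactor blends between the noisy input (1.0) and the fully denoised
// result (0.0), so anything above 0.0 leaves some of the original noise in.
OptixDenoiserParams params = {};
params.denoiseAlpha = 0;    // do not denoise the alpha channel
params.hdrIntensity = 0;    // optional device pointer, only used by the HDR model
params.blendFactor  = 0.0f; // 0.0 = output the fully denoised image
```

This struct is then passed to optixDenoiserInvoke() together with the input and output images.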
If that is all the data you have (no noise-free albedo, no camera-space normals), then there is nothing else you can do.
The normal buffer also only works in conjunction with the albedo buffer, so you would need both.
That means the denoiser inputs are either RGB, RGB+albedo, or RGB+albedo+normal.
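A minimal sketch of selecting one of those three combinations at creation time (OptiX 7.1 API; context is a hypothetical OptixDeviceContext created elsewhere):

```cpp
OptixDenoiserOptions options = {};
options.inputKind = OPTIX_DENOISER_INPUT_RGB;                   // beauty only
// options.inputKind = OPTIX_DENOISER_INPUT_RGB_ALBEDO;         // beauty + albedo
// options.inputKind = OPTIX_DENOISER_INPUT_RGB_ALBEDO_NORMAL;  // beauty + albedo + normal

OptixDenoiser denoiser = nullptr;
optixDenoiserCreate(context, &options, &denoiser);

// Pick the built-in model: LDR for inputs in [0, 1], HDR for unbounded values.
optixDenoiserSetModel(denoiser, OPTIX_DENOISER_MODEL_KIND_LDR, nullptr, 0);
```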
I’ve asked around internally, and the denoiser expert ran the image you posted above in comment 7 as the non-denoised input through a test program; this was the result:
This means the denoiser should actually work on these images, even though that was a JPG, which is not recommended as denoiser input due to the artifacts from the lossy compression. The denoiser expects raw data without any post-processing, as said in the linked post above.
With the given information, I can’t say what’s happening in your application that prevents you from getting a similar image.
What’s your system configuration?
OS version, installed GPU(s), display driver version number (this is mandatory!), OptiX version (major.minor.micro), CUDA toolkit version, host compiler version.
The OS is Windows 10 64-bit, the driver is 451.67, OptiX is 7.1.0, CUDA is 10.1 and 11.0, and the GPU is a GeForce RTX 2060.
I guess that the denoised result has to do with the spatial resolution of the image. The input I sent to you was a screenshot, so there is already some kind of binning of the noise. When I process a saved image with both my application and the OptiX sample, I get a noisy output.
Sorry, this was my fault. I apologize.
This is why I asked initially whether there is some kind of setting for the ‘kernel size’.
Here’s an image at the correct resolution (the noisy one):
The noise features are larger in the distant parts of the image, which probably breaks the denoiser. When I downsample (via the screenshot), the noise is compressed from, say, 4x4 pixels to 2x2. Is there a way to downsample an image by a factor of, say, 2.5, denoise it, and upsample it again?
The AI was presumably not trained on my use case.
You would lose significant information, since downsampling takes a weighted mean over several pixels, which already reduces the noise (averaging N independent noisy samples cuts the noise standard deviation by a factor of √N). There is no point in downsampling, then denoising (with the OptiX denoiser, at least), then upsampling again.
You have no (zero) control over the OptiX denoiser’s internals. If you want more control (tuning the expected noise, its variance, its shape, etc.), you want to use some BM3D variant [1] or NCSR [2].
[1] Ymir Mäkinen, Lucio Azzari, and Alessandro Foi, “Exact transform-domain noise variance for collaborative filtering of stationary correlated noise,” 2019 IEEE International Conference on Image Processing (ICIP), IEEE, 2019.
[2] Weisheng Dong, Lei Zhang, Guangming Shi, and Xin Li, “Nonlocally centralized sparse representation for image restoration,” IEEE Transactions on Image Processing, vol. 22, no. 4, pp. 1620-1630, Apr. 2013.