Optix denoiser and real world RGB image

I’ve got the denoiser working here on my machine. With 8-bit b/w images the resulting denoised image is really amazing. However, when I use a 24-bit RGB image, the resulting image shows almost no difference. Is there a way to play with the internal knobs of OptiX to adjust the ‘kernel size’? Or is there any other parameter I can use to improve the denoising quality and get close to the b/w results?

Regards,
Franz

You mean you have 8-bit greyscale images vs. 24-bit RGB color images?
There is no difference for the denoiser invocation. Either needs to be remapped to half- or float-precision RGB(A) formats anyway to run the denoiser.

However, when I use a 24-bit RGB image, the resulting image shows almost no difference.

You mean no difference between the noisy and the denoised image?
Do you set the OptixDenoiserParams::blendFactor to 0.0 to show the fully denoised image?
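For reference, blendFactor linearly mixes the unmodified noisy input back over the denoised result: 0.0 gives the fully denoised image, 1.0 the original input. Per pixel, the effect amounts to this (a plain illustration, not OptiX code):

```cpp
// OptixDenoiserParams::blendFactor mixes the unmodified input back over
// the denoised result: 0.0 yields the fully denoised image, 1.0 the
// original noisy image. Per pixel the output is effectively:
float blendPixel(float denoised, float noisy, float blendFactor)
{
    return (1.0f - blendFactor) * denoised + blendFactor * noisy;
}
```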

If that is all the data you have (no noise free albedo, no camera space normals) then there is nothing else to do.

Related topic:

How can I supply this kind of data? I assume that this is usually a rendering camera and not a real world camera, right?

Correct, I was assuming synthetic images.

The normal buffer also only works in conjunction with the albedo buffer so you would need both.
Means denoiser inputs are either RGB, RGB+albedo, or RGB+albedo+normal.

Hm, I see. How about feeding the output as albedo for the next image to be denoised?
I’ll try that…

Maybe provide some screenshots of the input and output images you want to be improved.


A kind of view from my office ;-).
I’ve already tried using the output as the albedo input. It changes the output somehow, but I get speckles.

Here’s the same view with a b/w image. And the resulting denoise is pretty good!

Is the color image the denoised result?
I was looking for both the original and denoised image to see why you say it’s not changing.

Here’s a denoised RGB:


In more detail (upper part is original, lower part is denoised):

I’ve asked around internally, and the denoiser expert ran the image you posted above in comment 7 through a test program as the non-denoised input, and this was the result:

which means the denoiser should actually work on these images, even though that was a JPG, which is not recommended as input to the denoiser due to the artifacts from the lossy compression. The denoiser expects raw data without any post-processing, as said in the linked post above.

I can’t say what’s happening in your application to not get a similar image with the given information.

What’s your system configuration?
OS version, installed GPU(s), display driver version number (this is mandatory!), OptiX version (major.minor.micro), CUDA toolkit version, host compiler version.

OS is Windows 10 64-bit. Driver is 451.67. OptiX is 7.1.0. CUDA is 10.1 and 11.0. GPU is a GeForce RTX 2060.

I guess that the denoised result has to do with the spatial resolution of the image. The input I’ve sent to you was a screenshot. Thus there is already some kind of binning of the noise. When I process a saved image with both my application and the optix sample, I get a noisy output.
Sorry for this, my fault. I have to apologize.

This is why I asked initially whether there is some kind of setting for the ‘kernel size’.
Here’s an image at the correct resolution (the noisy one):
And here’s the denoised one:

The noise grain is larger at the full resolution, which probably breaks the denoiser. When I downsample (via the screenshot), the noise shrinks from, say, 4x4 pixels to 2x2. Is there a way to downsample an image by, say, a factor of 2.5, denoise it, and upsample it again?
The AI was not trained on my use case.
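The downsample/denoise/upsample idea could be tried outside the denoiser with any resampler that supports non-integer factors like 2.5. A plain CPU sketch for a single-channel float image, using bilinear interpolation (this is my own illustration, not part of the OptiX API, and shrinking the image does discard detail):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Bilinear resampling of a single-channel float image from (sw, sh) to
// (dw, dh). Running this with a factor of 1/2.5 before denoising and 2.5
// afterwards is one way to try the "shrink the noise grain, denoise,
// enlarge again" idea.
std::vector<float> resample(const std::vector<float>& src,
                            int sw, int sh, int dw, int dh)
{
    std::vector<float> dst(static_cast<size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            // Map the destination pixel center back into source coordinates.
            float fx = (x + 0.5f) * sw / dw - 0.5f;
            float fy = (y + 0.5f) * sh / dh - 0.5f;
            int x0 = std::clamp(static_cast<int>(std::floor(fx)), 0, sw - 1);
            int y0 = std::clamp(static_cast<int>(std::floor(fy)), 0, sh - 1);
            int x1 = std::min(x0 + 1, sw - 1);
            int y1 = std::min(y0 + 1, sh - 1);
            float tx = std::clamp(fx - x0, 0.0f, 1.0f);
            float ty = std::clamp(fy - y0, 0.0f, 1.0f);
            float top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
            float bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
            dst[static_cast<size_t>(y) * dw + x] = top * (1 - ty) + bot * ty;
        }
    }
    return dst;
}
```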

Any response from the dev team on the bitmap I sent?

I could improve my image with a trick…, but it’s still not the same result as with b/w.