This is usually resolved by arbitrarily scaling the image intensity before denoising. My approach is to tonemap the image to LDR, then use the LDR denoiser. I also tried skipping the tonemap and using HDR denoising, but the results are similar. The issue is especially prominent when the input data has very high intensities, but it is sometimes also visible on images with a not-so-high dynamic range.
Some interesting bits of information:
I'm pretty sure I didn't encounter it before, and I updated my driver recently (went from 418.81 to 457.30). Not sure if that's related; maybe I just didn't notice it before, although I've used the denoiser for a few years.
It happens with both OptiX 6.0 and OptiX 7.2. However, it doesn't happen with OptiX 5.1.
I'm on a GTX 1060. The same problem was previously reported to me by GTX 20XX users.
In fact, the images I denoise are lightmaps. I wonder if they don't play well with the (latest?) training dataset? It used to work pretty well.
Images are originally stored as half3 and are converted to float3 before denoising.
BTW, it may be worth noting that the documentation says scaling high-intensity images may be necessary to get the best denoising quality. So if the values are large, it is expected that you may need to scale them. Also, it sounds like you're tone mapping after HDR denoising (and before LDR denoising as a workaround). But just in case I misunderstood, note that you should not attempt to tone map before HDR denoising.
“When using HDR input instead, RGB values in the color buffer should be in a range from zero to 10,000, and on average not too close to zero, to match the built-in model. Images in HDR format can contain single, extremely bright, nonconverged pixels, called fireflies. Using a preprocess pass that corrects drastic under- or over-exposure along with clipping or filtering of fireflies on the HDR image can improve the denoising quality dramatically. Note, however, that no tone-mapping or gamma correction should be performed on HDR data.”
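As a concrete example of such a firefly clamp, something like this would do as a preprocess (just a sketch; the threshold and the packed float RGB layout are assumptions, not something prescribed by the documentation):

```cpp
#include <algorithm>
#include <vector>

// Clamp extremely bright pixels ("fireflies") before HDR denoising.
// The packed float RGB layout and the threshold are illustration-only
// assumptions; pick a limit that fits your scene's intensity range.
void clampFireflies(std::vector<float>& rgb, float maxValue = 100.0f)
{
    for (float& channel : rgb)
        channel = std::min(channel, maxValue);
}
```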
Please have a look at these topics about the OptiX denoiser, which explain why the denoiser behavior can change between driver versions in OptiX 6 and newer, why the LDR denoiser isn't really useful anymore, how the HDR intensity calculation works, and what other issues can happen when exceeding the supported value ranges.
Images are originally stored as half3 and are converted to float3 before denoising.
That is unnecessary and will only hurt performance. Use the half3 input directly instead.
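Something along these lines for the image descriptor, assuming the OptiX 7.x OptixImage2D layout (please double-check the field names against optix_types.h in your SDK):

```cpp
#include <optix.h>

// Describe the existing half3 lightmap buffer directly to the denoiser
// instead of converting it to float3 first.
// d_half3Pixels is assumed to be a device pointer to tightly packed
// half-precision RGB data (6 bytes per pixel).
OptixImage2D makeHalf3Image(CUdeviceptr d_half3Pixels,
                            unsigned int width, unsigned int height)
{
    OptixImage2D image = {};
    image.data               = d_half3Pixels;
    image.width              = width;
    image.height             = height;
    image.pixelStrideInBytes = 6;                              // 3 * sizeof(half)
    image.rowStrideInBytes   = width * image.pixelStrideInBytes;
    image.format             = OPTIX_PIXEL_FORMAT_HALF3;
    return image;
}
```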
I'm not using optixDenoiserComputeIntensity at the moment and didn't use half3 directly, because I was porting directly from 6.0 to 7.2 (and AFAIK neither of these was supported in 6.0). I'm going to improve the 7.2 version soon using the new APIs, though.
Values are not extreme; on the last pic, the red spot has a maximum intensity of 4.
Images are tonemapped before being fed to the denoiser (simple Reinhard) and then inverse-tonemapped after denoising. It gave good results with OptiX 5.1 (and 6.0 on old drivers?). I also tried removing both the tonemap and the inverse tonemap and using HDR denoising, but didn't notice much difference.
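For reference, the tonemap pair looks roughly like this (a per-channel sketch of what I do; the clamp in the inverse is just there to avoid a division by zero for values near 1.0):

```cpp
#include <algorithm>

// Simple Reinhard tonemap: maps HDR values in [0, inf) into LDR [0, 1).
inline float reinhard(float c)
{
    return c / (1.0f + c);
}

// Inverse Reinhard: maps a denoised LDR value back to HDR.
// The clamp keeps the denominator away from zero near 1.0.
inline float inverseReinhard(float l)
{
    l = std::min(l, 0.9999f);
    return l / (1.0f - l);
}
```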
Quite a lot of different topics. Not sure I got the answer from them.
Anyway, I’ll try to reproduce it on the optixDenoiser sample and report back. Thanks!
didn't use half3 directly, because I was porting directly from 6.0 to 7.2 (and AFAIK neither of these was supported in 6.0)
half3 and half4 formats were always supported in all OptiX denoiser versions.
The uchar3 and uchar4 formats are not supported in any OptiX denoiser version.
Quite a lot of different topics. Not sure I got the answer from them.
It's sad that the LDR network is no longer shipped, and OptiX 5.1 seemingly doesn't work on 30XX cards anymore, meaning it's impossible to get the same results I was previously getting on that hardware.
OK, it seems like the results I get with true HDR denoising (no tonemapping involved) are actually quite a bit better than my old version. I'll use that for now and see if I'm able to clearly reproduce the issue in the future. Thanks again for the hints!
What’s the use case? Couldn’t you generate better lightmaps with more samples?
What’s the area which is actually denoised?
Are you taking the borders of these images into account for the HDR intensity calculation?
You could try setting the HDR intensity manually instead. It's meant to improve the denoising in dark areas: when the input image is too dark, the denoiser pulls the values up to a more suitable range. If that blows out the bright areas too much, you could try reducing the HDR intensity value.
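Roughly like this in OptiX 7.2 terms, either computing the intensity from the input image or overriding it with your own value (a sketch only; please verify the optixDenoiserComputeIntensity signature and the OptixDenoiserParams fields against your SDK headers, and the manual value is just an example knob):

```cpp
#include <optix.h>
#include <cuda_runtime.h>

// Either let OptiX compute the HDR intensity from the input image, or
// override it manually to influence how strongly dark areas are pulled up.
// denoiser, stream, colorImage, scratch/scratchSize and d_intensity (a
// device allocation holding one float) are assumed to be set up as in the
// optixDenoiser SDK sample.
void setupHdrIntensity(OptixDenoiser denoiser, CUstream stream,
                       const OptixImage2D& colorImage,
                       CUdeviceptr scratch, size_t scratchSize,
                       CUdeviceptr d_intensity,
                       OptixDenoiserParams& params,
                       bool useManualValue, float manualValue)
{
    if (useManualValue)
    {
        // Manual override: a smaller value reduces how much dark regions are
        // brightened internally, which can help with blown-out bright areas.
        cudaMemcpy(reinterpret_cast<void*>(d_intensity), &manualValue,
                   sizeof(float), cudaMemcpyHostToDevice);
    }
    else
    {
        // Let the denoiser derive the intensity from the input image itself.
        optixDenoiserComputeIntensity(denoiser, stream, &colorImage,
                                      d_intensity, scratch, scratchSize);
    }
    params.hdrIntensity = d_intensity; // read when optixDenoiserInvoke runs
}
```

The device float behind params.hdrIntensity is read during optixDenoiserInvoke, so it has to stay valid until that call has finished.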
What's your random number sampling? It looks like white noise. Maybe try a better random number generator.
No, it's a lightmap. Albedo is not present at all. I can technically supply normals, although in this case it's just a flat plane, so they would be constant.
I can, of course. It's just that, to me, the point of using the denoiser is to get away with lower sample counts :D
And all options (even with all their flaws, except for the grid) are still immensely better than the input.
I assume you dilate the light map borders from the actual denoised image area. That means the borders around the light maps are generated by a post-process.
I just wanted to make sure that the HDR intensity calculation is accurate before delving into the possible manual adjustments you could try with the HDR intensity value to influence the results for bright or dark areas.
If the lightmaps are dynamic, I would understand the goal of getting away with as few samples as possible, but if they are static, just throw more resources (samples) at them.
Correct, lightmaps are dilated before denoising. I was hoping that wouldn't affect the intensity calculation much (at least it can't affect the min/max values of the image).
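If it does matter, I could compute an intensity from the valid (pre-dilation) texels only and pass it in manually. Something like this, as a rough sketch (I don't know the exact formula OptiX uses internally, so the inverse-average-luminance below is just a guess on my side):

```cpp
#include <vector>

// Rough guess at a manual HDR intensity computed only from valid lightmap
// texels (ignoring dilated borders). NOTE: this is NOT necessarily the
// formula OptiX uses internally; it's just an inverse-average-luminance
// estimate to feed into OptixDenoiserParams::hdrIntensity.
float guessHdrIntensity(const std::vector<float>& rgb,        // packed RGB floats
                        const std::vector<bool>&  validTexel) // one flag per pixel
{
    double sum   = 0.0;
    size_t count = 0;
    for (size_t i = 0; i < validTexel.size(); ++i)
    {
        if (!validTexel[i])
            continue;
        const float r = rgb[3 * i + 0];
        const float g = rgb[3 * i + 1];
        const float b = rgb[3 * i + 2];
        sum += 0.2126f * r + 0.7152f * g + 0.0722f * b; // Rec. 709 luminance
        ++count;
    }
    const double avg = (count > 0) ? sum / count : 1.0;
    return (avg > 1e-6) ? static_cast<float>(1.0 / avg) : 1.0f;
}
```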
Anyway, my takeaway (correct me if I’m wrong) is:
Using HDR inputs with precisely defined intensity is important after driver 442.50.
There is no way to get 5.1-like behaviour from 6 or 7 (and no way to run 5.1 on 30XX). The training dataset is different now.
Not sure what triggered the grid-like effect, though. I can provide an .exr (which is a tonemapped image) that generates it, if there is any interest in fixing it. Under- or over-filtering is something I can expect from the denoiser, but regular patterns look a bit weird.
Actually, that is not what I meant. I was assuming you render the light maps, then denoise them without borders, then dilate them from the denoised image and place them into your final light map image (atlas?).
Using HDR inputs with precisely defined intensity is important after driver 442.50.
Yes. It has actually always been important to set the HDR intensity, ever since that functionality existed, independent of the display driver version.
Beauty buffer values must be in the range [0.0f, 10000.0f].
Albedo buffers must be in the range [0.0f, 1.0f].
Normal buffers can contain the null vector for misses and normalized vectors in camera space otherwise. (Same as an unscaled and unbiased normal map, meaning values in the range [-1.0, 1.0].)
(When using the LDR mode, which uses the same denoiser network now, the values must be in the range [0.0f, 1.0f] and the HDR intensity should not have an effect on the denoiser anymore. It previously did, inadvertently, but that has been fixed.)
Use half input formats for better performance when you can.
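If you want to verify those ranges on your side, a trivial check like this is enough (assuming host-side packed float RGB data; adapt the limits for albedo or normal buffers):

```cpp
#include <cstdio>
#include <vector>

// Check that a buffer stays inside a supported value range before handing
// it to the denoiser, e.g. [0.0f, 10000.0f] for the beauty buffer.
bool checkRange(const std::vector<float>& values, float lo, float hi)
{
    for (float v : values)
    {
        if (v < lo || v > hi)
        {
            std::printf("Out-of-range value: %f\n", v);
            return false;
        }
    }
    return true;
}
```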
There is no way to get 5.1-like behaviour from 6 or 7 (and no way to run 5.1 on 30XX). The training dataset is different now.
Yes. The denoiser AI networks and algorithms have changed since OptiX 5.1. It is also a lot faster.