Which method is used for OptiX Denoiser's Upscaling 2X?

I have implemented OPTIX_DENOISER_MODEL_KIND_UPSCALE2X and OPTIX_DENOISER_MODEL_KIND_TEMPORAL_UPSCALE2X in my path tracer app (using OptiX 7.6.0).
I read the programming guide and the API reference, but did not find an answer there.
I’m curious how the upscaling is actually achieved. Is it part of the OptiX AI denoising?
It cannot be DLSS, because that would not run on my Pascal GTX board.
Or is it using NVIDIAImageScaling (which is part of the DLSS v3.1.0 SDK)?
Or a derivative of it?
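
For reference, here is roughly how I create and invoke the upscaling denoiser. This is a minimal sketch against the OptiX 7.6 host API with buffer allocation and error checking stripped out; the helper makeRGBA32FImage is my own, and passing the upscaled output resolution to optixDenoiserComputeMemoryResources / optixDenoiserSetup is my reading of the programming guide's description of the 2x models, not something confirmed here:

```cpp
#include <optix.h>
#include <optix_stubs.h>
#include <cuda_runtime.h>

// Small helper (my own, not part of the API) filling an OptixImage2D
// that describes a tightly packed RGBA32F buffer.
static OptixImage2D makeRGBA32FImage( CUdeviceptr data, unsigned int w, unsigned int h )
{
    OptixImage2D img = {};
    img.data               = data;
    img.width              = w;
    img.height             = h;
    img.rowStrideInBytes   = w * static_cast<unsigned int>( sizeof( float4 ) );
    img.pixelStrideInBytes = static_cast<unsigned int>( sizeof( float4 ) );
    img.format             = OPTIX_PIXEL_FORMAT_FLOAT4;
    return img;
}

// Denoise and upscale a noisy beauty image by 2x in each dimension.
// noisyBeauty is inputWidth x inputHeight; upscaledOut must hold
// (2 * inputWidth) x (2 * inputHeight) RGBA32F pixels.
void denoiseUpscale2x( OptixDeviceContext context, CUstream stream,
                       CUdeviceptr noisyBeauty, CUdeviceptr upscaledOut,
                       unsigned int inputWidth, unsigned int inputHeight )
{
    const unsigned int outputWidth  = 2 * inputWidth;
    const unsigned int outputHeight = 2 * inputHeight;

    OptixDenoiserOptions options = {};   // no albedo/normal guide layers in this sketch
    OptixDenoiser denoiser = nullptr;
    optixDenoiserCreate( context, OPTIX_DENOISER_MODEL_KIND_UPSCALE2X, &options, &denoiser );

    // Query state/scratch sizes; for the 2x models I pass the upscaled
    // output resolution here (assumption, see above).
    OptixDenoiserSizes sizes = {};
    optixDenoiserComputeMemoryResources( denoiser, outputWidth, outputHeight, &sizes );

    CUdeviceptr state = 0, scratch = 0;
    cudaMalloc( reinterpret_cast<void**>( &state ),   sizes.stateSizeInBytes );
    cudaMalloc( reinterpret_cast<void**>( &scratch ), sizes.withoutOverlapScratchSizeInBytes );

    optixDenoiserSetup( denoiser, stream, outputWidth, outputHeight,
                        state, sizes.stateSizeInBytes,
                        scratch, sizes.withoutOverlapScratchSizeInBytes );

    // One beauty layer: input at render resolution, output at 2x.
    OptixDenoiserLayer layer = {};
    layer.input  = makeRGBA32FImage( noisyBeauty, inputWidth,  inputHeight );
    layer.output = makeRGBA32FImage( upscaledOut, outputWidth, outputHeight );

    OptixDenoiserGuideLayer guideLayer = {};   // no guide images, no flow vectors
    OptixDenoiserParams params = {};           // defaults; no HDR intensity buffer

    optixDenoiserInvoke( denoiser, stream, &params,
                         state, sizes.stateSizeInBytes,
                         &guideLayer, &layer, 1 /*numLayers*/,
                         0 /*offsetX*/, 0 /*offsetY*/,
                         scratch, sizes.withoutOverlapScratchSizeInBytes );

    cudaFree( reinterpret_cast<void*>( scratch ) );
    cudaFree( reinterpret_cast<void*>( state ) );
    optixDenoiserDestroy( denoiser );
}
```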

Thank you!

Is it part of the OptiX AI denoising?

Yes, all the OptiX denoiser modes are implemented as AI networks inside the OptiX driver module, with GPU-dependent inference kernels that make use of underlying hardware capabilities such as Tensor Cores.
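
From the application's point of view the network is simply selected by the model kind passed at creation time; switching between plain HDR denoising and 2x upscaling is a one-enum change. A minimal sketch (the upscale flag is a hypothetical application-side toggle, for illustration only):

```cpp
#include <optix.h>

// The same creation call serves every denoiser mode; the driver module
// picks the matching network from the model kind passed here.
OptixDenoiser createDenoiser( OptixDeviceContext context, bool upscale )
{
    const OptixDenoiserModelKind kind =
        upscale ? OPTIX_DENOISER_MODEL_KIND_UPSCALE2X
                : OPTIX_DENOISER_MODEL_KIND_HDR;   // regular HDR denoising, no upscaling

    OptixDenoiserOptions options = {};             // albedo/normal guides off in this sketch
    OptixDenoiser denoiser = nullptr;
    optixDenoiserCreate( context, kind, &options, &denoiser );
    return denoiser;
}
```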

The current Ada RTX boards, with third-generation RT Cores and fourth-generation Tensor Cores, provide much higher ray tracing and denoising performance than your entry-level Pascal board. You’re really missing out on what’s possible today.

The OptiX denoiser is continually improved across display driver versions, most noticeably with new driver branches, so it’s always recommended to try newer display drivers as well as the newest OptiX SDK if denoiser quality or performance is a concern.
