Greetings, I’ve been implementing a CUDA 10.2 + OptiX 7 path-tracing program that also uses the OptiX 7 denoiser, and I’ve run into the following problem:
Each version of the drivers I have tried so far produces different denoiser results. I first noticed this when I deployed code from my Windows developer machine to a Linux server machine, but after that I tried reinstalling a couple of driver versions on the Windows machine, and the results still differed. Regardless of whether I used the HDR or LDR model, left hdrIntensity = 0, or calculated it with optixDenoiserComputeIntensity, the results differed between driver versions every time, while the results without denoising were always identical.
Is this expected to happen across driver versions? If so, is the difference caused by a different default model being passed to optixDenoiserSetModel when data == NULL? And if so, is there some way to store / extract the model used by one particular driver version, so that I could always pass that same model to optixDenoiserSetModel? Since in my application the results of path tracing + denoising are used as input to another calculation, I would really prefer the results to be consistent / stable regardless of the driver version, if possible.
For reference: the input to the denoiser consists of grayscale images (R = G = B), with all channels normalized to the 0 … 1 interval
(0 = minimum irradiation, 1 = maximum irradiation), as my application works only with irradiation and not with separate color channels.
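To make the input format concrete, the preprocessing described above can be sketched roughly like this (a minimal sketch, not my actual code; the function name and the packed-RGB-float buffer layout are assumptions for illustration):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical helper: map raw irradiation values to the 0..1 range and
// replicate each value into R = G = B, producing the grayscale RGB buffer
// that is fed to the denoiser (layout assumed: 3 packed floats per pixel).
std::vector<float> toDenoiserInput(const std::vector<float>& irradiation)
{
    const auto [minIt, maxIt] =
        std::minmax_element(irradiation.begin(), irradiation.end());
    const float range = std::max(*maxIt - *minIt, 1e-20f); // avoid div by 0

    std::vector<float> rgb;
    rgb.reserve(irradiation.size() * 3);
    for (float v : irradiation) {
        const float n = (v - *minIt) / range; // 0 = min, 1 = max irradiation
        rgb.push_back(n); // R
        rgb.push_back(n); // G
        rgb.push_back(n); // B
    }
    return rgb;
}
```

The point being: the denoiser never sees anything outside 0 … 1, and all three channels are identical, so color-dependent behavior should not be a factor in the differences I am seeing.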
And driver versions on Windows where the differences in results were clearly visible were, for example: 440.97 DCH vs. 445.75 DCH.