OptiX 7 denoiser: different results with different driver versions

Greetings, I've been implementing a CUDA 10.2 + OptiX 7 path-tracing program that also uses the OptiX 7 denoiser, and I've run into the following problem:
each driver version I have tried so far produces different denoiser results. I first noticed this when I deployed code from my Windows developer machine to a Linux server machine, but after that I tried reinstalling a couple of driver versions on the Windows machine and the results were still different. Regardless of whether I used the HDR or LDR mode, left hdrIntensity = 0, or calculated it with optixDenoiserComputeIntensity, the results differed between drivers every time, while the results without denoising were always identical.

Is this expected to happen with different driver versions? If so, is the difference between drivers caused by a different default model being passed to optixDenoiserSetModel when data == NULL? And if so, is there some way to store / extract the model used by one particular driver version, so that I could always pass it to optixDenoiserSetModel? Since the results of the path tracing + denoising are used in my application as input to another calculation, I would really prefer the results to be consistent / stable regardless of the driver version, if possible.
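For context, the relevant part of my denoiser setup looks roughly like the sketch below. It is not compilable on its own (error checking, buffer allocation, and the surrounding setup/invoke calls are omitted, and `context`, `stream`, `inputLayer`, `intensityPtr`, `scratch`, and `scratchSize` are placeholders for my own variables):

```
// Selecting the built-in model: data == NULL picks the network
// shipped inside the driver; I found no public way to extract it.
OptixDenoiser denoiser = nullptr;
OptixDenoiserOptions options = {};
optixDenoiserCreate(context, &options, &denoiser);

// Built-in HDR model; a user-supplied network would need
// OPTIX_DENOISER_MODEL_KIND_USER with non-NULL data.
optixDenoiserSetModel(denoiser, OPTIX_DENOISER_MODEL_KIND_HDR,
                      /*data=*/nullptr, /*sizeInBytes=*/0);

// Either leave params.hdrIntensity at 0 or let OptiX compute it:
optixDenoiserComputeIntensity(denoiser, stream, &inputLayer,
                              intensityPtr, scratch, scratchSize);
params.hdrIntensity = intensityPtr; // CUdeviceptr to a single float
```

Both variants (hdrIntensity = 0 and the computed intensity) show the driver-to-driver differences described above.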

For reference: the input to the denoiser consists of grayscale images (R = G = B), with all channels normalized to the 0 … 1 interval
(0 = minimum irradiation, 1 = maximum irradiation), as my application actually works only with irradiation and not with separate color channels.
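The normalization step is nothing OptiX-specific; a minimal CPU-side sketch of what I do is below (the helper name `irradianceToRGBA` and the `Pixel` struct are my own illustration, not part of any API):

```cpp
#include <algorithm>
#include <vector>

// One RGBA float pixel, matching a typical float4 denoiser input layout.
struct Pixel { float r, g, b, a; };

// Min-max normalize raw irradiation values to 0 ... 1 and replicate
// them into the color channels (grayscale: R = G = B).
std::vector<Pixel> irradianceToRGBA(const std::vector<float>& irradiance)
{
    std::vector<Pixel> out;
    if (irradiance.empty())
        return out;

    const auto [lo, hi] = std::minmax_element(irradiance.begin(), irradiance.end());
    const float minV  = *lo;
    const float range = (*hi > minV) ? (*hi - minV) : 1.0f; // avoid division by zero

    out.reserve(irradiance.size());
    for (float v : irradiance) {
        const float n = (v - minV) / range; // 0 = minimum, 1 = maximum irradiation
        out.push_back({n, n, n, 1.0f});
    }
    return out;
}
```

The resulting buffer is what gets uploaded as the denoiser input image.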

Driver versions where differences in the results were clearly visible on Windows were, for example, 440.97 DCH vs. 445.75 DCH.

Is this expected to happen with different driver versions?

Yes, the OptiX AI denoiser has lived inside the display driver since OptiX 6.5.0.

If so, is the difference between drivers caused by a different default model being passed to optixDenoiserSetModel when data == NULL?

Yes, the AI denoiser can use differently trained networks or algorithms in different drivers, with the goal of improving results and performance.

And if so, is there some way to store / extract the model used by one particular driver version, so that I could always pass it to optixDenoiserSetModel?

No, there is only one network at a time, and it's pretty big already. There isn't even a separate trained network for the LDR mode anymore, to keep the size down.

Driver versions where differences in the results were clearly visible on Windows were, for example, 440.97 DCH vs. 445.75 DCH.

Right, the R440 driver branch up to version 442.50 contained a different training network, which was improved in that release and in later versions. Drivers 442.50 and newer should behave the same so far, but this can change in future versions.