Unfortunately it’s not possible to say what is going on there. This shouldn’t really happen.
Could you please provide the following system configuration information:
OS version, installed GPU(s), VRAM amount, display driver version, OptiX (major.minor.micro) version, CUDA toolkit version (major.minor) used to generate the input PTX, host compiler version.
What exactly do you mean by “the memory controller load is increasing with increasing frame number”?
Does CPU or GPU memory usage grow with the number of rendered frames? If so, that sounds like a memory leak inside the application.
Do you use the OptiX C++ wrappers in your application?
Do you change any scene data and forget to destroy the previous data? The C++ wrappers in the old OptiX API don’t do that automatically.
I assume you’re not running the denoiser every frame but only once at the very end, at that sub-frame count.
If not, does this also happen when not using the denoiser?
Could you please check the GPU clock rates at frequent intervals while the performance degrades, and again during the second, slower run?
Maybe the GPU got stuck in a low power state.
Assuming you’re on Windows, the nvidia-smi tool is normally installed to
C:\Program Files\NVIDIA Corporation\NVSMI and lets you query the clocks from the command line:
nvidia-smi.exe --query --display=CLOCK
Check the nvidia-smi manual (PDF) in that folder for many more options.
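For watching the clocks over time, a repeated query in CSV format is handier than the full --query dump. A possible invocation would be the following (the exact set of supported query fields can vary with the driver version, so check `nvidia-smi --help-query-gpu` on your system):

```shell
# Print SM/memory clocks, performance state, GPU utilization, and used memory
# once per second. Run this in a second console while the renderer slows down.
nvidia-smi.exe --query-gpu=clocks.sm,clocks.mem,pstate,utilization.gpu,memory.used --format=csv -l 1
```

If the performance state drops (e.g. to P8) or the clocks fall while the GPU is still under load, the slowdown is more likely a power/clock management issue than something inside your OptiX code; if memory.used keeps climbing per frame, that points back at a leak in the application.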
Other than that, there are multiple path tracer examples I’ve written against OptiX 5.1 (which also build under 6.5) and OptiX 7 versions.
Would you be able to verify whether these behave correctly on your system, to rule out a system-specific issue?
(When using the old OptiX Introduction examples with MSVS versions 2017 and newer, please set the CUDA_HOST_COMPILER CMake variable manually. The old FindCUDA.cmake is out of date in that repository.)
Please find links inside the sticky posts of this sub-forum, e.g. here:
I would recommend using OptiX 7 versions for new projects if possible.