The OptiX denoiser network requires training inputs that are difficult to produce from photo or video data.
The training data contains albedo and normal maps, which help the network identify surfaces that should appear smooth and noise-free. The OptiX API also allows optional albedo and normal maps to be supplied during inference.
The training data also needs noisy images (low samples per pixel) to be paired with resolved images (very high samples per pixel).
Both kinds of input are difficult to obtain from photographic sensor data, but beyond that there's no requirement that the denoiser be trained or used on 3D renderings specifically. It might be possible to fake the albedo and normal maps during training, or to supply empty inputs for them. I haven't tried either; it might reduce the denoiser's effectiveness, or it might not work at all. For the noisy/resolved image pairs, you'd need to make sure the images are aligned exactly, pixel for pixel: if they aren't, the network will encode more than noise properties and will probably learn to warp your input image.
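One way to sanity-check that a noisy/resolved pair is pixel-aligned before adding it to a training set is phase correlation, which estimates the translation between two images from their FFTs. This is a minimal sketch, not anything from the OptiX SDK; the function name and the use of single-channel float arrays are my own assumptions.

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation of img_b relative to img_a
    using phase correlation. (0, 0) suggests the pair is pixel-aligned.
    Hypothetical helper, assumes 2D grayscale float arrays of equal shape."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    # Normalized cross-power spectrum: keep only the phase difference.
    cross_power = np.conj(fa) * fb
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the halfway point correspond to negative shifts.
    h, w = img_a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.random((64, 64))                               # stand-in "resolved" image
    noisy = clean + 0.1 * rng.standard_normal((64, 64))        # aligned "noisy" image
    shifted = np.roll(noisy, shift=(3, -5), axis=(0, 1))       # misaligned pair
    print(estimate_shift(clean, noisy))    # expect (0, 0) for an aligned pair
    print(estimate_shift(clean, shifted))  # expect (3, -5) for the rolled pair
```

A nonzero result is a cheap signal to drop or re-register the pair; for real sensor data you'd likely want subpixel registration (e.g. scikit-image's `phase_cross_correlation`) rather than this integer-only version.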