Do ray tracing and deep learning have any overlap?

Hi everyone,

Do ray tracing and deep learning have any overlap?


Many ways. Deep learning can complement ray tracing by “filling in” missing information with plausible interpolated data, as NVIDIA’s DLSS (Deep Learning Super Sampling) does. This works in shipping products today.

Neural rendering and neural graphics primitives are hot areas of research currently. Last year’s SIGGRAPH course is one place to start. Another good resource is this recent overview of NeRF techniques at CVPR 2022, where ray tracing is used to render radiance fields. NVIDIA has some published work in these areas; see this search.
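To make the ray tracing connection concrete: in NeRF-style rendering, each pixel’s ray is marched through the field, and density/color samples are composited with the standard volume-rendering quadrature. Here’s a minimal NumPy sketch of that step; `toy_field` is a stand-in function I made up for illustration, not a trained network.

```python
import numpy as np

def toy_field(points):
    """Stand-in for a trained radiance field: returns (density, rgb)
    per sample point. A real NeRF would be an MLP or hash-grid model."""
    d = np.linalg.norm(points, axis=-1)
    sigma = np.exp(-(d - 1.0) ** 2 / 0.05)   # density concentrated on a shell at radius 1
    rgb = 0.5 + 0.5 * points                  # arbitrary position-based color
    return sigma, np.clip(rgb, 0.0, 1.0)

def render_ray(origin, direction, t_near=0.0, t_far=4.0, n_samples=128):
    """Composite samples along one ray with the volume-rendering quadrature:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    t = np.linspace(t_near, t_far, n_samples)
    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))  # spacing between samples
    pts = origin + t[:, None] * direction
    sigma, rgb = toy_field(pts)
    alpha = 1.0 - np.exp(-sigma * delta)                # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = trans * alpha                             # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)

color = render_ray(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```

The same compositing loop runs whether the field is an analytic toy like this or a trained network; the network just replaces `toy_field`.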

Is there any potential in using ML to adjust screen-space or GI sampling strategies, aside from the volumetric NeRF work? Perhaps pairing lower sample counts with DLSS-style upscaling? In other words, can we “train” an ML hemispherical sampler that takes geometric and shading data (curvature, reflectivity) as inputs and generates a set of sampling points on the hemisphere that is significantly better than pseudo-random stratified sampling?
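Just to pin down the shape of that idea, here is a toy sketch: a tiny, untrained MLP (hypothetical, weights are random placeholders) maps per-shading-point features to small warps of a stratified base pattern, which are then projected to cosine-weighted directions on the hemisphere. A real version would train the weights against a rendering loss.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 16  # samples per hemisphere (perfect square, for the stratified grid)
# Random placeholder weights: 2 input features -> 8 hidden -> 2*N offsets.
W1 = rng.normal(0.0, 0.5, (8, 2)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (2 * N, 8)); b2 = np.zeros(2 * N)

def learned_hemisphere_samples(features):
    """Map features (e.g. [curvature, reflectivity]) to per-sample 2D warps,
    apply them to a stratified (u, v) grid, then do the usual cosine-weighted
    mapping to unit directions with z >= 0 (local shading frame)."""
    h = np.tanh(W1 @ features + b1)
    offsets = 0.25 * np.tanh(W2 @ h + b2).reshape(N, 2)  # small learned warp
    g = int(np.sqrt(N))
    i, j = np.meshgrid(np.arange(g), np.arange(g), indexing="ij")
    base = (np.stack([i, j], axis=-1).reshape(N, 2) + 0.5) / g
    uv = np.mod(base + offsets, 1.0)
    phi = 2.0 * np.pi * uv[:, 0]
    r = np.sqrt(uv[:, 1])
    return np.stack([r * np.cos(phi), r * np.sin(phi),
                     np.sqrt(np.maximum(0.0, 1.0 - uv[:, 1]))], axis=-1)

dirs = learned_hemisphere_samples(np.array([0.3, 0.7]))
```

The interesting research questions are what loss to train this against (e.g. error versus a converged reference) and how to keep the learned pattern temporally stable.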

Probably? More seriously, this whole area of research is hot and there are lots of people working on exactly these kinds of questions, so this sounds promising.


The links I gave didn’t transfer, so here you go:
Happy hunting!