Path Tracing with Primary Rays (NVRTX Unreal Engine 5.4)

Hello,

I’d like to start using the NvRTX branch of UE 5.4 after seeing the results from the real-time path tracing demos. However, I would like some clarification on how those demos were run. Is it specifically the lighting that is path traced in real time, or is it Unreal’s offline path tracer combined with real-time denoising?

Does this mean that everything must still be rasterized first? I would like to make use of custom DXR intersection shaders, as NvRTX appears to do for hair strands. Furthermore, which passes must be fed to DLSS Ray Reconstruction for it to work properly? Even in a custom rendering engine it would be good to know what information must be rendered for DLSS-RR, and whether DLSS-RR will replace all denoisers entirely.
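For context, in raw DXR a custom intersection shader is attached through a procedural-primitive hit group on the pipeline state object. A minimal host-side C++ sketch of that wiring, outside Unreal (the shader export names here are placeholders I made up, not anything from NvRTX):

```cpp
// Minimal D3D12 sketch: attaching a custom intersection shader via a
// procedural-primitive hit group. Export names are placeholders for
// whatever the compiled DXIL library actually exports.
#include <d3d12.h>
#include "d3dx12.h"  // helper classes (CD3DX12_STATE_OBJECT_DESC etc.)

void AddProceduralHitGroup(CD3DX12_STATE_OBJECT_DESC& pipelineDesc)
{
    auto* hitGroup =
        pipelineDesc.CreateSubobject<CD3DX12_HIT_GROUP_SUBOBJECT>();

    // Procedural primitives (e.g. hair strands stored as AABBs in the
    // BLAS) invoke the intersection shader instead of the fixed-function
    // triangle test.
    hitGroup->SetHitGroupType(D3D12_HIT_GROUP_TYPE_PROCEDURAL_PRIMITIVE);
    hitGroup->SetIntersectionShaderImport(L"HairIntersection"); // placeholder
    hitGroup->SetClosestHitShaderImport(L"HairClosestHit");     // placeholder
    hitGroup->SetHitGroupExport(L"HairHitGroup");
}
```

What I can’t tell from the outside is how (or whether) this kind of setup can be plugged into NvRTX’s rendering path, which is really what I’m asking about.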

Hi there @thomasluvan and welcome to the NVIDIA developer forums.

I think you might have combined a few things here that do not belong together.

  • Path-Tracing: This is currently only available in the Path-Tracing SDK. It is not part of the NvRTX UE fork. The FAQ states:

Q: Can I use RTXPT in Unreal Engine or Unity?
A: At launch, you will not be able to run the RTX Path Tracing in Unreal or Unity. However, we are investigating solutions to make the technology available in those engines.

  • Path-traced Lighting: You probably mean the RTXDI demos. RTXDI is part of NvRTX and does not use the Unreal offline path tracer; it is based on the ReSTIR algorithm (see the sketch after this list). See the documentation as well.
  • DLSS Ray Reconstruction: This is not part of NvRTX yet. You can sign up to get notified, though. Currently it is only available to select developer studios.
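For intuition on ReSTIR: its core building block is streaming weighted reservoir sampling over candidate light samples, followed by temporal and spatial reuse of the reservoirs. A much-simplified C++ sketch of just the reservoir update (real implementations run this per pixel in a shader; the names are illustrative, not from the RTXDI SDK):

```cpp
// Simplified sketch of ReSTIR's core: streaming weighted reservoir
// sampling over candidate light samples. Real implementations run per
// pixel on the GPU and add temporal/spatial reservoir reuse on top.
#include <cstdint>
#include <random>

struct Reservoir {
    uint32_t lightIndex = 0;  // currently selected light sample
    float    wSum       = 0;  // running sum of candidate weights
    uint32_t m          = 0;  // number of candidates seen so far

    // Consider one candidate with weight w (e.g. target pdf / source pdf).
    void update(uint32_t candidate, float w, std::mt19937& rng) {
        wSum += w;
        ++m;
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        // Keep the new candidate with probability w / wSum, which yields
        // a selection distributed proportionally to the weights.
        if (u(rng) * wSum < w)
            lightIndex = candidate;
    }
};
```

Spatial and temporal reuse then combine reservoirs across neighboring pixels and the previous frame, which is where most of ReSTIR’s quality gain comes from.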

I hope that helps.

Thanks for answering. I’ve done a lot more research since this post and looked into the source code, so I have a better picture now. I know that DLSS Ray Reconstruction is not available to developers yet. However, it would still be nice to know, for example, what needs to be output to the G-buffer to make use of it. I imagine it will be similar to regular DLSS, in which case it will need the depth buffer, motion vectors, and previous frames. But will it also need specific radiance/lighting data that is normally produced during ray tracing?
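To make the question concrete, here is the input set I am guessing at, written as a hypothetical C++ struct. None of these names come from an actual NVIDIA header; the guide buffers are extrapolated from what existing real-time denoisers typically consume:

```cpp
// Hypothetical sketch of the per-frame inputs I expect DLSS-RR to need,
// based on regular DLSS inputs plus denoiser-style guide buffers.
// These names are my guesses, NOT an actual NVIDIA API.
struct RayReconstructionInputs {
    void* color;            // noisy ray-traced radiance (signal to denoise/upscale)
    void* depth;            // scene depth, as with regular DLSS
    void* motionVectors;    // per-pixel motion vectors, as with regular DLSS
    void* diffuseAlbedo;    // guide-buffer guess: demodulated diffuse albedo
    void* specularAlbedo;   // guide-buffer guess: specular albedo/reflectance
    void* normalsRoughness; // guide-buffer guess: shading normals + roughness
    // Frame history is presumably accumulated internally by the SDK,
    // as with regular DLSS.
};
```

If anyone can confirm or correct which of these guide buffers will actually be required, that would help me plan my G-buffer layout.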

Again, any advice is appreciated! I want to use DLSS-RR in my custom renderer, so I would like the renderer to be ready by the time DLSS-RR is released to the public.