When running a shipped sample project with DLAA mode enabled, I see that the built-in TSR anti-aliasing is also used. r.AntiAliasingMethod returns 4, which corresponds to TSR. What's more, TSR is actually forced via the provided widget Blueprint.
This topic says TSR should be disabled: Unreal with DLSS 3.5 - Should TSR be enabled
Setting the built-in AA to None with r.AntiAliasingMethod=0 leaves DLAA acting alone, which produces an ugly result:
So I’m curious: if DLAA (NVIDIA Deep Learning Anti-Aliasing) is an anti-aliasing solution by itself, why does it require another, additional AA method like TSR on top? Or what is the right way to use DLAA? Or am I missing something?
Hello @zvsmailrelay, welcome to the NVIDIA developer forums.
It seems that my information from the other thread is either outdated or wrong. I just tried this myself and I can confirm the visual difference.
I’ll ask for clarification asap and follow up here with the correct explanation soon.
Thanks for pointing this out!
Thanks for your prompt reply, Markus!
Actually, I’m surprised that I’m the first person on the internet asking such an obvious thing. It’s very important for my projects, as the technology looks very promising. Waiting for your clarification.
After a couple of confused looks regarding this question, I can explain a bit. It is one of those things…
TSR is associated with the setting r.AntiAliasingMethod 4, that is correct. But the reverse does not hold if you have a different supersampling method/feature/tool enabled. That means if you set it to 4 AND have DLSS enabled, DLSS will override the engine internally and replace TSR with its own algorithms.
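As a config sketch, the override described above corresponds to the following cvar state. The [SystemSettings] section of DefaultEngine.ini is the stock UE way to pin cvars; r.NGX.DLSS.Enable is the toggle exposed by the NVIDIA DLSS plugin (treat the exact cvar names as assumptions that may vary by engine and plugin version):

```
; DefaultEngine.ini (sketch, assuming stock UE5 cvars + NVIDIA DLSS plugin)
[SystemSettings]
r.AntiAliasingMethod=4   ; engine-side AA set to TSR
r.NGX.DLSS.Enable=1      ; with the plugin active, DLSS replaces TSR internally
```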
One thing that is often overlooked is that even if you say you “only” use DLAA and no DLSS, what it really means is that you are using DLSS in DLAA mode without any scaling. But DLSS is still enabled.
You can check this yourself. If you start up stat GPU to see GPU usage statistics on screen and then set r.AntiAliasingMethod 0, you will see that DLSS is disabled automatically, and with it DLAA.
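For reference, the check boils down to two in-game console commands (stat GPU and the cvar are stock UE; the parenthetical notes are mine):

```
stat GPU                 (show per-pass GPU timings; look for the DLSS entries)
r.AntiAliasingMethod 0   (AA set to None; DLSS, and with it DLAA, turns off)
```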
The other forum post relates to the fact that, in the past, it was possible to force-enable TSR even when DLSS was enabled, which led to a DLSS warning in the logs and to unexpected render results or crashes.
The cvar should basically not be touched at all if you want to use any part of DLSS.
I hope that made it clearer? If not, please let me know.
Thanks, Markus, for your hard work here and prompt replies. It’s clear what you are saying.
So it seems r.AntiAliasingMethod should be set to exactly ‘4’ for DLSS to work. The same appears to be true for NIS and maybe other NVIDIA technologies. Other modes produce unexpected results:
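For anyone following along, the r.AntiAliasingMethod values in stock UE5 are, to my knowledge (please verify against your engine version):

```
r.AntiAliasingMethod
  0 = None
  1 = FXAA
  2 = TAA
  3 = MSAA
  4 = TSR   <- the value the DLSS plugin hooks into and replaces
```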
Not necessarily via the cvar, though. I believe not all developers use TSR in their workflow; one may just enable the NVIDIA plugins with their favourite AA as a project setting.
Finding the forced TSR deep in the Sample Scene Blueprints was more of a lucky accident. Such an important thing should be mentioned somewhere in the quick-start notes. Leaving it here for everyone who follows. Please correct me if I’m wrong, Markus.