I would first like to say I’m designing a photorealistic game meant to run at 60 fps on 30-series GPUs and next-gen consoles (internally at 1440p) without the use of upscaling or temporal filters.
As for the 30 series, the game needs to run at 60 fps at each GPU’s recommended resolution, for instance: native 1080p on the 3060, native 1440p on the 3070, native 4K on the 3080.
Any hardware with higher rasterization power would be for graphics modding or enabling ultra/psycho settings.
I decided to go with UE5 for Lumen. Nanite and VSMs do not offer better performance as advertised; LODs and regular cascaded shadow maps/DFM shadows will work just fine.
Recently Nvidia asked people to consider if or how AI is or could be relevant in games.
I couldn’t care less about DLSS upscaling and Frame Gen.
First of all, 1080p upscaled to 4K using DLSS is NOT better than real 4K. Maybe if the image is standing still… which never happens unless you’re taking a screenshot. DLSS RUINS motion clarity, and motion is the primary state a game scene is in. The only reason people think DLSS looks better than native is that most games force horrible TAA at native resolution. The only good TAA I’ve ever seen is Decima’s:
“That is, we use the raw render of the previous frame and the current frame, but nothing more. That way, we prevent any long ghosting trails caused by failed rejects, and we reach a stable and final result in only two frames, making our technique very responsive.” (Decima paper)
This right here. Meanwhile, Unreal Engine defaults to 8 samples… and who knows how many samples upscalers like DLSS and FSR accumulate.
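To make the Decima quote concrete, here is a minimal sketch of that two-frame idea: blend only the reprojected previous frame with the current one, clamping the history sample into the current frame’s neighborhood so a failed reproject collapses toward the current frame instead of ghosting. The function names and the 0.5 blend weight are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def resolve_taa(current, history, neighborhood_min, neighborhood_max, blend=0.5):
    """Two-frame temporal resolve sketch (per-pixel, any channel layout).

    history is the reprojected previous frame; neighborhood_min/max are the
    min/max of the current frame's local pixel neighborhood.
    """
    # Clamp history into the current neighborhood: rejected/ghosting history
    # can never stray far from what the current frame actually shows.
    clamped = np.clip(history, neighborhood_min, neighborhood_max)
    # Blend: the result is stable after only two frames of accumulation.
    return blend * current + (1.0 - blend) * clamped
```

With only one frame of history in the chain, a bad sample is gone after a single resolve, which is the responsiveness the quote is pointing at.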
Here is where Nvidia could use tensor cores to REALLY revolutionize game development.
Imagine a UE5 plugin that lets me run an AI model (trained on tensor cores) that takes a mesh’s triangle and material data and creates a highly optimized version, nearly identical to the original, with a huge performance increase from lowering the triangle count and using texture tricks.
The AI model could also create better-looking and better-optimized LODs instead of just halving the triangle count (intelligent LODs made by AI).
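To illustrate the difference: a naive LOD chain halves triangles per level regardless of content, while a smarter chain picks each level’s budget from what the viewer can actually see. Everything below is a toy sketch of my own, not a real Nvidia or UE5 API, and the coverage-based heuristic is a stand-in for what a trained model would learn.

```python
def naive_lod_chain(base_triangles, levels):
    # What engines typically do today: halve the count at each LOD level.
    return [base_triangles // (2 ** i) for i in range(levels)]

def error_driven_budget(base_triangles, screen_coverage, full_detail_coverage=1.0):
    # Toy heuristic: triangle budget scales with how much of the screen the
    # mesh covers. An AI model would instead learn where detail is
    # perceptually important (silhouettes, hero props, etc.).
    fraction = min(1.0, screen_coverage / full_detail_coverage)
    return max(int(base_triangles * fraction), 16)  # keep a tiny floor
```

The point is that the budget becomes a function of perceptual importance rather than a fixed divide-by-two schedule.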
To push this AI model even further, it should also create a more optimized material for the optimized mesh, for example by:
- Optimizing the texture size.
- Optimizing the material’s resolution.
- Increasing its tiling use/appearance on the mesh to save VRAM.
Fake parallax occlusion via mesh UVs, optimizing tiling usage, preserving high poly counts on the most important parts of the mesh, and then making superior handcrafted LODs takes months on an environment with thousands of mesh types. AI could do this in a couple of minutes.
The workflow would consist of clicking each mesh instance in the scene, applying the AI model to it, and replacing it with the AI-constructed mesh. This would be the last step in optimizing the environment, because if we’re going to let AI optimize the mesh, it needs to go all the way. What I mean is that the raw textures, UVs, and mesh construction produced by the AI model should be allowed to be incomprehensible to human perception; that is fully worth it if this is the final step in optimizing the scene and makes real-time rendering cheaper.
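The workflow above boils down to a simple loop. Here is a hypothetical sketch of it; none of these class or function names are a real UE5 or Nvidia API, and the optimizer body is a stub standing in for the imagined tensor-core model.

```python
from dataclasses import dataclass

@dataclass
class MeshInstance:
    name: str
    triangles: int

def ai_optimize(mesh: MeshInstance) -> MeshInstance:
    # Stand-in for the imagined AI model: here it just pretends to cut the
    # triangle count to a quarter and tag the result as optimized.
    return MeshInstance(mesh.name + "_opt", max(mesh.triangles // 4, 1))

def optimize_scene(scene: list) -> list:
    # Final optimization pass: visit every instance in the scene and swap it
    # for its AI-constructed replacement.
    return [ai_optimize(m) for m in scene]
```

Since it is the last pass, the replacements never need to be hand-edited again, which is what makes the “incomprehensible to humans” output acceptable.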
Finding someone to truly optimize games is extremely hard these days, and pretty much all gamers will agree on that. Many studios simply will not pay the right people to do the job correctly.
You’ve already done something similar with RTX Remix, but the workflow I’m describing above could truly change game development for developers, and modern game performance and visuals with it.
Make this happen!
PLEASE invest in this AI model that focuses on optimizing unique scenes developers make! WE NEED THIS! This needs to be Nvidia’s next innovation.
By making a plugin like this, you could even require developers to add DLSS to PC ports if they choose to use your plugin.
This is feedback from a developer and a gamer.
If there is a certain individual at Nvidia I need to send this idea to, let me know who it is.