AI model for UE5 that optimizes game meshes. This could revolutionize game development

I would first like to say I’m developing a photorealistic game designed to run at 60fps on 30-series GPUs and next-gen consoles (internally at 1440p) without the use of upscaling or temporal filters.

As for the 30 series, the game needs to run at 60fps at each GPU’s recommended resolution, for instance: native 1080p on the 3060, native 1440p on the 3070, native 4K on the 3080.
Any hardware with higher rasterization power would be for graphics modding or enabling ultra-psycho settings.

I decided to go with UE5 for Lumen. Nanite and VSMs do not offer better performance as advertised. LODs and regular cascaded shadow maps/DFM shadows will work just fine.

Recently Nvidia asked people to consider whether and how AI is or could be relevant in games.

"I could care less about DLSS upscaling and Frame Gen

First of all, 1080p upscaled to 4K using DLSS is NOT better than real 4K. Maybe if the game is standing still, which never happens unless you’re taking a screenshot. DLSS RUINS motion clarity (and motion is the primary state a game scene is in). The only reason people think DLSS looks better than native is that most games forced horrible TAA on the native image. The following is the only good TAA I’ve ever seen:

“That is, we use the raw render of the previous frame and the current frame, but nothing more. That way, we prevent any long ghosting trails caused by failed rejects, and we reach a stable and final result in only two frames, making our technique very responsive.” (Decima paper)

This right here. Meanwhile, Unreal Engine runs with its default of 8 temporal samples, and who knows how many frames of history upscalers like DLSS and FSR use.
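To make the contrast concrete, here is a minimal CPU-side sketch of the two accumulation strategies. This is my own illustration, not code from the Decima paper or any engine: the blend weights and the neighborhood-clamp rejection are assumptions, and a real implementation runs in a shader with motion-vector reprojection.

```python
import numpy as np

def taa_long_history(current, history, alpha=1.0 / 8.0):
    """Conventional long-history TAA: blend the new frame into an ever-growing
    accumulation buffer. When history rejection fails, stale samples keep being
    fed back in, so ghost trails can persist for many frames."""
    return alpha * current + (1.0 - alpha) * history

def taa_two_frame(current, prev_raw_reprojected, alpha=0.5):
    """Two-frame idea from the quoted Decima description: blend only the raw
    render of the previous frame with the current frame, so any failed
    rejection is flushed after two frames. The clamp below is a stand-in for a
    neighborhood-clamp rejection step (an assumption, not their exact method)."""
    shifts = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
    neighbors = [np.roll(current, s, axis=(0, 1)) for s in shifts]
    lo, hi = np.minimum.reduce(neighbors), np.maximum.reduce(neighbors)
    prev_clamped = np.clip(prev_raw_reprojected, lo, hi)
    return alpha * current + (1.0 - alpha) * prev_clamped
```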

Here is where Nvidia could use tensor cores to REALLY revolutionize game development.

Imagine a UE5 plugin that lets me use an AI model (trained on Tensor Cores) that takes a mesh’s triangle count and material data and creates a highly optimized version, nearly identical to the original mesh, but with an insane performance increase from lowering the triangle count and using texture tricks.

The AI model could also create much better-looking and better-optimized LODs instead of just halving the triangle count (intelligent LODs made by AI).
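For reference, this is roughly what “just halving the triangle count” looks like today through UE’s built-in reduction, scripted from the editor. The exact class names differ between UE5 releases (newer versions expose this through StaticMeshEditorSubsystem), so treat it as a hedged sketch of the baseline an AI model would replace rather than a drop-in snippet.

```python
import unreal

def build_naive_lods(static_mesh, num_lods=4):
    """Baseline auto-LODs: every level keeps a fixed fraction of the triangles,
    with no awareness of silhouette, materials, or typical viewing angles.
    This is the dumb halving step the proposed AI model would replace."""
    settings = []
    for lod in range(num_lods):
        s = unreal.EditorScriptingMeshReductionSettings()
        s.set_editor_property("percent_triangles", 0.5 ** lod)    # LOD0 100%, LOD1 50%, LOD2 25%, ...
        s.set_editor_property("screen_size", 1.0 / (2.0 ** lod))  # crude screen-size switch points
        settings.append(s)
    options = unreal.EditorScriptingMeshReductionOptions()
    options.set_editor_property("reduction_settings", settings)
    # Depending on the engine version this call lives on EditorStaticMeshLibrary
    # or on the StaticMeshEditorSubsystem; adjust for your UE5 release.
    unreal.EditorStaticMeshLibrary.set_lods(static_mesh, options)
```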

To further improve this AI model, it should also create a more optimized material for the optimized mesh, for example by:

  • Optimizing the texture size.

  • Optimizing the resolution of the material.

  • Increasing its use of tiling on the mesh to save VRAM.

Faking parallax occlusion via mesh UVs, optimizing tiling usage, preserving high poly counts on the most important parts of the mesh, and then making superior handcrafted LODs takes months for an environment with thousands of mesh types. AI could do this in a couple of minutes.

The workflow would consist of clicking each mesh instance in the game scene, applying the AI model to it, and replacing it with the AI-constructed mesh. This would be the last step in optimizing the environment, because if we’re going to let AI optimize the mesh, it needs to go all the way. What I mean is that the raw textures, UVs, and mesh construction produced by the AI model should be allowed to be incomprehensible to human perception; that is fully worth it if this is the final step in optimizing the scene and it makes real-time computation easier.
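As a rough illustration of that last-pass workflow, a plugin could expose something like the loop below through editor Python. `ai_optimize_mesh` is purely hypothetical (it stands in for the AI model being pitched), and the selection API may sit on a different class depending on the UE5 version.

```python
import unreal

def optimize_selected_instances(ai_optimize_mesh):
    """Proposed final optimization pass: for every selected static mesh actor in
    the level, run the (hypothetical) AI optimizer and swap in the result.
    ai_optimize_mesh(mesh) -> unreal.StaticMesh is a placeholder, not a real API."""
    # Newer UE5 versions expose selection via unreal.EditorActorSubsystem instead.
    actors = unreal.EditorLevelLibrary.get_selected_level_actors()
    for actor in actors:
        if not isinstance(actor, unreal.StaticMeshActor):
            continue  # skeletal meshes, lights, etc. are out of scope here
        component = actor.static_mesh_component
        source_mesh = component.static_mesh
        if source_mesh is None:
            continue
        optimized = ai_optimize_mesh(source_mesh)  # AI-generated, lower-cost replacement
        component.set_static_mesh(optimized)
```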

Finding someone to truly optimize games is extremely hard these days, and pretty much all gamers will agree on that. Many studios will simply not pay the right people to do the job correctly.

You’ve already done something similar with RTX Remix, but the workflow I’m describing above could truly change game development for developers, as well as modern game performance and visuals.

Make this happen!

PLEASE invest in this AI model that focuses on optimizing unique scenes developers make! WE NEED THIS! This needs to be Nvidia’s next innovation.
By making a plugin like this, you can require developers to add DLSS to PC ports if they choose to use your plugin.

This is feedback from a developer and a gamer.
If there is a certain individual at Nvidia I need to send this idea to, let me know who it is.


Hello @mrjimenezprime and welcome to the NVIDIA developer forums!

I really like your ideas, they do make a lot of sense. I myself have some game dev background, from a time when you still had to push the level designers to cut down their triangle counts or do a hell of a lot of manual optimization to make things work.

With DLSS and especially Frame Generation at their disposal, game devs have completely new possibilities to crank up visual fidelity without sacrificing performance. But if you want to have raw performance without these tools, you still need to look at the source of it and optimize. So having AI help at that stage is only logical.

I did forward your suggestion to a couple of places within NVIDIA. While I cannot promise any feedback here or any immediate action, I am certain some people will think about your suggestions.

Thank you for your feedback!

I really hope you are not an AI bot.

With DLSS and especially Frame Generation at their disposal, game devs have completely new possibilities to crank up visual fidelity without sacrificing performance.

The thing is, graphics are becoming an oxymoron for a majority of gamers.
When you have so much detail only for it to be blurred in motion, what’s the point?
It only serves to help developers use it as a crutch.

And unfortunately for me and people like me who find temporal blur in modern games unacceptable/unplayable, many people actually enjoy the look of DLSS.
But hey, maybe those people aren’t playing at 1080p like me.

The slippery slope of upscalers like DLSS

There is a massive slippery slope when you begin to develop a high fidelity game and then lower the entire resolution.

Many developers are using output-resolution upscalers like DLSS to ship 720p games that barely run at 60fps on a 3060. They use DLSS or FSR to call it 1080p when it’s really 720p. It should run at 1080p 60fps on a 3060 (or any GPU with similar rasterization).
Because real 1080p (not ruined by a temporal AA method) isn’t like pouring acid into my eyes the way 720p pretending to be 1080p is, even if that 720p is path-traced goodness.

The biggest problem is that most developers are using high-end 4K or 1440p cards because they make the development workflow faster. But then they test the game at 4K and use DLSS or TSR to reach 60fps at fake 4K: they upscale from a better-suited resolution like 1080p or 1440p to a 4K-ish image.
Which, in fact, may not look like blurry hell.

But most of us are not playing at 4K.

Most people can only afford 1080p gaming, which runs around $300-350. This is why respecting native-resolution performance on your development card is so important.
This is the reason I will still have my 3060 when I have a 4090.

When I first encountered TAA, I thought the game was using some kind of half-assed rendering tech to achieve realistic results. Four years later, I know it was most likely half the resolution I thought I was getting. The Decima TAA is the only temporal method that doesn’t ghost or smear, at least until the recent PS5 TAA “update” in HFW.

As for my idea of an AI model trained to optimize a mesh based on its appearance, triangle count, and VRAM usage: the artists who do this kind of mesh optimization have either been lost or are highly paid to stay with specific studios.

With an AI model plugin in UE5, we could turn months of work into a couple of hours, with results better optimized than a human’s.
A mesh so optimized only an AI could have done it.
This is the future of game development.

I’m sorry, but as an AI text-based model, I don’t have direct access to internet resources such as detailed Nanite tutorials, so my comments might be incorrect.

You mean something like the above? (Again, missing emojis on this server)

No, I am no bot, I assure you. And I don’t think I agree with the oxymoron comment. If you look at the latest examples of Alan Wake 2 with SR/FG and RR, there is definitely added fidelity, especially in high-frequency textures. AND higher performance. Calling it all “temporal blur” is highly oversimplifying things.

In any case, Nanite might not be there yet, but in terms of Mesh performance they are on the right track and constantly improving. Is it based on AI? No. Might it be in the future? Who knows.

AI for game developers is here to stay and I am sure NVIDIA will be a big part of that.

Thanks again for sharing your ideas!

with SR/FG and RR

I stated those were irrelevant

I already told you my target hardware and performance.
DLSS 3 Frame Gen is not available on my target
hardware, which is the 30 series and next-gen consoles.
That’s a major chunk of the market.
Not to mention my game is input-sensitive since it’s a combat game,
and many games that will innovate in gameplay will find that added input latency is not an option.


if you look at the latest examples of Alan Wake 2

That is a ridiculous game to use for this conversation

You mean the one in “4K”… the one upscaling from 1080p or 1440p?
I already told you about the slippery slope of upscalers:
it only benefits people playing above 1080p.

Same thing with TAA; it only looks good at 4K.

Not to mention Alan Wake 2 has barely ANY motion inside of it.
Just a bunch of walking and slow panning, which doesn’t agitate temporal filters (again, especially at 4K).


AND higher performance. Calling it all “temporal blur” is highly oversimplifying things.

No, temporal blur is real and it’s ruining modern games.

I’m not oversimplifying it. Temporal blur is just that simple.
Temporal blur happens when there’s actual action on screen.
Action + temporal methods = horrible motion (unless at 4K, and even then).
There is no argument there (you can try for Nvidia’s sake, though).

Alan Wake literally has ghosting within the first 4 seconds of the trailer when something actually moves fast.
If you compare the motion in any fast-paced game with and without TAA,
it’s painfully obvious how much better motion looks without TAA, TSR,
DLSS, FSR, pick your poison, etc.

First-person games also have an easier time with temporal filters.
Third-person games such as mine, and every other third-person game since TAA,
have motion smearing and ghosting because the character sits between the environment and the camera.

This is the cancer of TAA, especially when developers
force it on players because it hides stupid temporally dithered objects.

This is why I’m trying to implement that Decima TAA in my game.
It is better than DLAA, miles ahead. Go test Death Stranding for yourself.

Problems with thin objects like hair, and long ghosting trails, never appear with the FXAA+TAA
by the Decima devs,
unlike with the cancerous temporal methods I mentioned above.
That TAA combined with negative mipmap bias is a motion-clarity and sharpness dream.


In any case, Nanite might not be there yet, but in terms of Mesh performance they are on the right track and constantly improving. Is it based on AI? No. Might it be in the future? Who knows.

No, Nanite is not good at all. It’s for lazy, cheap development, and
performance is worse; I already linked that in my first post.
Yes, a mesh with 100K triangles is going to perform better with Nanite.

But an artistically hand-optimized version of that 100K mesh will always outperform
that Nanite mesh by miles.

THAT is where AI needs to come into game development already.


The optimized mesh will perform miles ahead of the Nanite mesh (100K or 2K tris),
not just because it’s lower poly,
but because high-poly meshes destroy/slow shadow and lighting calculations.

Game graphics are becoming an oxymoron. Go look at Immortals of Aveum.
It uses all this crap designed for lazy, cheap development, like Nanite and
virtual shadow maps.

Extra about IOA

It seems to me that they could have just stuck with baked GI instead of Lumen.

And the game looks absolutely horrid in motion/action, so basically all the time. There was
no point in adding that much detail to a game that requires
medium-fast movement if it’s only going to melt people’s eyes with temporal vaseline.

Remnant 2 is also an oxymoron.

Fortnite has horrible performance compared to running without its 5.1 features.
If it had only used Lumen, it would be well above 60fps with GI.



If those games had used NVRTX with an AI mesh optimizer plugin
(the idea I’m pitching to Nvidia), they would get even better performance with Lumen GI.

Better performance means you can play at resolutions higher than 1080p on a 1080p
card at the gold standard of 60fps.

I play and test all major games in person, on cheap LED TVs, plasma TVs, and
expensive 144Hz monitors. I know the difference between YouTube compression and
temporal smearing. (A lot of people try to use that BS as an excuse.)

Send my AI model idea and workflow to everyone you know at Nvidia, and send my
argument about TAA ruining games as well. I’ll be asking someone from Epic Games to
review this topic here on this forum.

I will be sending my AI model idea and workflow to every major tech company.
That includes competitors to Nvidia.
It won’t be my fault if Nvidia is too slow to jump on this for its own good.
I gave them a head start with this feedback post because they already have RTX
Remix, which inspired this AI optimization workflow.

I don’t care who gets praised for the next innovation.
I just care about making a game that will change technological standards for gamers.

Just a quick question on what seems to be an interesting topic: would you suggest the AI model also generate LOD meshes for skeletal meshes? Cloth and maybe groom?


I find that static environment meshes seem to be the biggest part of the problem
(hence why Nanite has become so popular, though it continues to ruin games due to a major misunderstanding of its purpose, much like DLSS, which is no longer treated as a feature but as “optimizing for 60fps”).

I think what you are talking about would require an entirely different AI model, because bone weighting would have to be recalculated for the new triangles.

The AI model I’m trying to describe takes into account how the original high-poly mesh looks from all angles. A skeletal mesh deforms, which could break the tiling and fake-parallax-occlusion optimizations usable on static or stationary objects.
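To show why that is a separate problem, here is a deliberately crude baseline of my own (not an existing tool or API): once the topology changes, every new vertex needs skin weights again, and the simplest possible fix is copying them from the nearest original vertex. A skeletal-mesh optimizer would have to do far better than this.

```python
import numpy as np

def transfer_skin_weights(orig_verts, orig_weights, new_verts):
    """Crude skin-weight transfer after retopology: copy the bone weights of
    the nearest original vertex onto each new vertex. Real tools project onto
    the original surface and re-normalize per-bone influences."""
    # orig_verts: (N, 3), orig_weights: (N, num_bones), new_verts: (M, 3)
    d2 = ((new_verts[:, None, :] - orig_verts[None, :, :]) ** 2).sum(axis=-1)  # (M, N) squared distances
    nearest = d2.argmin(axis=1)   # index of the closest original vertex per new vertex
    return orig_weights[nearest]  # (M, num_bones) copied weights
```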

I’m talking about stationary environment objects. They are the main thing ruining performance in games, and the area where developers seem to lack the coherence or patience to do the work.

I agree that Nanite and Lumen aren’t performant at the moment. I did some early tests by moving the UT (DM-Deck) maps to UE5 with Nanite and Lumen, and performance just tanked very hard. Nanite isn’t well suited to low-poly geo, VSM also tanks with low-poly geo, and Lumen doesn’t perform well without either. Dense geo doesn’t make much sense either, as the art workflow is super tiresome. Also, what is the point? Remnant 2 and Immortals of Aveum were disappointing, and we’re going back to the Stone Age of rendering at, what, 540p? Really?


Also, what is the point? Remnant 2 and Immortals of Aveum were disappointing, and we’re going back to the Stone Age of rendering at, what, 540p? Really?

Yet Nvidia doesn’t think that graphics are becoming an oxymoron?
THIS IS THE DEFINITION.

I’ll be updating this popular feedback post to talk about how they are stupidly limiting features such as Lumen and VSMs to only work well with Nanite meshes.
You cannot bake Lumen mesh cards on LOD meshes, and VSMs disappear (completely!) at a ridiculously short distance from LOD meshes.

LOD meshes aren’t even allowed for PCG!

This is absolute bullcrap!