Why not use deep learning for deinterlacing interlaced rendering?

Really not sure where to put this, but it’s an idea that has sat with me for a long time. Would love to get this idea to someone who knows where to take it.

Current DLSS is built around two separate processes: Super Sampling (spatial upscaling) and Frame Generation (temporal frame interpolation).

Using machine learning techniques to optimize and improve the quality of deinterlacing frames that the application renders interlaced seems like an obvious extension of this process. Each interlaced field covers only every other scanline, so the game renders at effectively half the resolution per frame and can produce updated motion information roughly twice as often. Of course, most modern game engines aren’t built for interlaced rendering, so this would likely require significant rethinking of their render pipelines, which is probably why it hasn’t been done to date.
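
To make the halved workload concrete, here is a minimal sketch of interlaced field rendering in plain Python/NumPy. Everything here (`render_scanline`, the `scene` dict, the toy shading function) is an illustrative assumption, not anything from a real engine or from DLSS; the point is just that each field shades half the rows, at a newer point in time than the last.

```python
import numpy as np

# Hypothetical stand-in for the engine's per-scanline shading work.
def render_scanline(scene, y, t):
    # A real engine would rasterize/shade one row of pixels here.
    return np.full(scene["width"], np.sin(y * 0.1 + t), dtype=np.float32)

def render_field(scene, t, parity):
    """Render every other scanline: parity 0 = even rows, 1 = odd rows.

    Each call shades half the rows of a full frame, which is where the
    roughly 2x throughput of interlaced rendering comes from."""
    h, w = scene["height"], scene["width"]
    field = np.zeros((h, w), dtype=np.float32)
    for y in range(parity, h, 2):
        field[y] = render_scanline(scene, y, t)
    return field

scene = {"width": 640, "height": 360}
even_field = render_field(scene, t=0.00, parity=0)  # rows 0, 2, 4, ...
odd_field = render_field(scene, t=0.01, parity=1)   # rows 1, 3, 5, ... (newer in time)
```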

Temporal reuse of previous-frame data and intelligent reprojection, much like the current implementation of Frame Generation, would certainly be helped here by only having to reconstruct half the image, with up-to-date reference scanlines directly above and below every missing one.
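
Continuing the sketch above, here is a crude hand-written version of that reconstruction step: a “weave” of the previous field’s rows, with a “bob” (vertical interpolation) fallback wherever per-pixel motion says the half-frame-old data can’t be trusted. The names and the threshold are assumptions for illustration only; the motion magnitudes would come from the engine’s motion vectors, which DLSS-style techniques already consume.

```python
import numpy as np

def deinterlace_weave(new_field, prev_field, parity, motion_mag, threshold=0.5):
    """Fill the scanlines `new_field` did not render (it holds rows of the
    given parity) by weaving in the previous field's rows, falling back to
    vertical interpolation ("bob") where per-pixel motion is too large."""
    h, w = new_field.shape
    out = new_field.copy()
    for y in range(1 - parity, h, 2):          # rows missing from this field
        above = new_field[y - 1] if y > 0 else new_field[y + 1]
        below = new_field[y + 1] if y < h - 1 else new_field[y - 1]
        bob = 0.5 * (above + below)            # spatial fallback, fresh data
        weave = prev_field[y]                  # temporal reuse, older data
        moving = motion_mag[y] > threshold
        out[y] = np.where(moving, bob, weave)  # per-pixel choice
    return out

# Static scene for the example, so the weave is always trusted; uses the
# even_field/odd_field produced by the earlier sketch.
motion_mag = np.zeros_like(odd_field)
full_frame = deinterlace_weave(odd_field, even_field, parity=1, motion_mag=motion_mag)
```

An ML model would replace exactly this hand-tuned per-pixel blend with learned reconstruction, which is where the quality headroom should be.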

As far as I can tell, this would come with most of the benefits of resolution scaling via DLSS and of interpolating intermediate frames via Frame Generation, while giving the network better, more recent rendered information to go off of.

Capcom’s RE Engine is already capable of interlaced rendering, and it provides a decent performance uplift. With further optimization using deep learning techniques, I would be surprised if it couldn’t be taken further.