Okay, I'm getting bombarded with this new frame generation tech. I love it, but it's obviously still not perfect, and it still demands tons of raw compute. So here's my idea; I'm calling it "live gen". I have a theory as to how to reduce compute power and increase frame rates with zero artifacts.
Hopefully this leads to cooler GPUs and higher boost clock speeds. It might involve the game engine and/or the drivers. When generating a frame, the GPU creates a pixel formula (or pixel-block formula) for colour over time, instead of just a single pixel colour, which lowers the rate at which the CPU has to feed it data. The images can then be generated from the player's input devices, the CPU, and whatever the game changes, so these pixel (or pixel-block) formulas don't even need to be regenerated every frame. We know the data inputs, and the tessellation and ray tracing can be pre-calculated for most of the input when building the pixel formulas' colours. The screen output is then just re-evaluating each pixel formula on a time scale to get the correct colour. That reduces the actual per-frame work to one simple calculation per pixel (or per pixel block), letting frames be generated at exactly the monitor's refresh rate.
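To make the core idea concrete, here's a minimal sketch (my own toy illustration, not anything a real GPU or driver does today): each pixel stores a small formula of time instead of a fixed colour, and producing a frame is just one evaluation per pixel, regardless of how expensive building the formulas was. The `make_fade`/`make_static` helpers and the single-channel 2x2 "framebuffer" are all hypothetical simplifications.

```python
def make_fade(start, end, duration):
    """Pixel formula: linearly fade one 8-bit channel value to another."""
    def formula(t):
        a = min(max(t / duration, 0.0), 1.0)  # clamp progress to [0, 1]
        return round(start + (end - start) * a)
    return formula

def make_static(value):
    """Pixel formula for non-dynamic content (e.g. a HUD element)."""
    return lambda t: value

# A tiny 2x2 "framebuffer" holding formulas rather than colours.
formulas = [
    [make_static(255), make_fade(0, 255, 1.0)],
    [make_fade(255, 0, 2.0), make_static(0)],
]

def render(t):
    """One evaluation per pixel: this is the whole per-frame cost."""
    return [[f(t) for f in row] for row in formulas]

print(render(0.0))  # -> [[255, 0], [255, 0]]
print(render(1.0))  # -> [[255, 255], [128, 0]]
```

The point of the sketch is the split: `make_fade` is the expensive "generate a formula" step that can run rarely, while `render` is the cheap per-refresh step.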
So what happens is: we get an input like turning right, and formula recalculation for all stable objects can be slowed from 30 down to 5 fps-like speeds (based on how smooth the inputs are) while the pixel formulas are created. Active areas, such as particles or action areas that require CPU data input, obviously need their pixel formulas regenerated more often, but those will likely be simpler formulas. The simplest case is a non-dynamic heads-up display, where a pixel is one flat colour and its formula only gets refreshed when it changes. The pixel formulas are joined together like a MIDI player with roughly 8.3 million channels (one per pixel at 4K), or fewer if you go by pixel blocks or a lower resolution. The output to the monitor is then reduced to those ~8.3 million (or fewer) evaluations to get the right pixel colours. So yes, it increases the processing needed per pixel-formula or pixel-block-formula compared to a plain pixel, but the savings come from the generation of the frame itself. I'd say it costs about the same as, or very likely less than, what multi-frame generation costs, since multi-frame generation effectively has to create its own pixel formulas anyway.
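The mixed update rates above can be sketched like this (again my own toy model: the region names, the 120 Hz output rate, and the per-region regeneration rates are all assumptions for illustration). Formula regeneration runs at a different rate per region, while frame output always runs at the monitor's refresh rate.

```python
REFRESH_HZ = 120          # assumed monitor/output rate
REGEN_HZ = {              # how often each region's formulas are rebuilt
    "stable_geometry": 5, # slow-changing scenery
    "action_area": 30,    # particles / combat needing CPU input
    "hud": 1,             # near-static overlay
}

def regen_frames(region, total_frames, refresh_hz=REFRESH_HZ):
    """Output-frame indices on which this region's formulas are rebuilt."""
    step = refresh_hz // REGEN_HZ[region]
    return list(range(0, total_frames, step))

# Over one second of 120 Hz output, count the expensive regeneration events.
for region in REGEN_HZ:
    print(region, len(regen_frames(region, REFRESH_HZ)))
```

Stable geometry regenerates 5 times in that second instead of 120, the HUD once, while every one of the 120 output frames still only pays the cheap per-pixel evaluation cost.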
With that, I really think a GPU could easily generate a true, zero-artifact 600 fps 4K gaming experience while needing less processing from both GPU and CPU, therefore generating less heat, and likely being able to boost clock speeds purely to lower latency. As for each pixel formula: it would likely be a colour scale based off a texture map, the direction/speed the pixel is travelling across the texture map, and the colour pre-adjusted by ray or path tracing. A pixel formula can become complex, but most will be a simple scale of colours, and only some might need a complex scale formula. Latency could go either way depending on the amount of changes (or item changes) and how massive the colour variations in the textures are. I think this idea would benefit single-player hyper-realistic games the most. I also think TAA or DLAA would no longer be needed.
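One of those pixel formulas could be sketched like this (a hypothetical toy, with an assumed 1-D texture strip standing in for a real texture map): a colour scale read from the texture, advanced along a direction at a given speed, pre-scaled by a lighting factor computed once by ray/path tracing.

```python
texture_row = [10, 40, 80, 160, 240, 160, 80, 40]  # assumed 1-D texture strip

def make_scroll_formula(start_u, speed, light=1.0):
    """Pixel colour = texture sample moving at `speed` texels/sec,
    pre-scaled by a lighting factor traced once, not per frame."""
    def formula(t):
        u = int(start_u + speed * t) % len(texture_row)  # wrap across strip
        return round(texture_row[u] * light)
    return formula

f = make_scroll_formula(start_u=0, speed=2.0, light=0.5)
print(f(0.0))  # texel 0, half-lit -> 5
print(f(1.0))  # texel 2, half-lit -> 40
```

Most pixels would get a formula this simple (a scale of colours along the texture); only the complicated cases would need anything more.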
Any response to this is a good response. If the idea is truly a dumb one, it'll help me stick to my real lane of expertise outside of computing.