The first GPU company to offer it was Nvidia in 2022, followed by AMD one year later, and now Intel has joined in the fun. I am, of course, talking about frame generation, and while none of the systems are perfect, they all share the same issue: increased input latency. However, [[link]] researchers at Intel have developed a frame generation algorithm that adds no lag whatsoever, because it extrapolates new frames rather than interpolating them.
If you've a mind for highly technical documents, you can read the full details about how it all works on one of the researchers' GitHub pages. Just as with all rendering technologies, this one has a catchy name and a suitable initialism: G-buffer Free Frame Extrapolation (GFFE). To understand what it's doing differently to DLSS, FSR, and XeSS-FG, it helps to have a bit of an understanding of how the current frame generation systems work.
All of them interpolate: the GPU holds a freshly rendered frame back, generates a new frame between it and the previous one, and only then displays both. That held-back frame is where the extra input latency comes from.

This is where frame extrapolation comes into play. Rather than holding rendered frames back in a queue, the algorithm keeps a history of what frames have been rendered before and uses them to generate the next one. The system then simply adds the extrapolated frame after a 'normal' one, giving the required performance boost.

Such systems aren't new and they've been in development for many years now, but nothing has appeared so far to match the likes of DLSS in terms of real-time speed. What sets GFFE apart is that it's pretty fast (6.6 milliseconds to generate a 1080p frame) and it doesn't require access to a rendering engine's motion vector or G-buffer data, just the completed frames.

In theory, that means GFFE could be applied at the driver level, rather than [[link]] requiring integration into the game's rendering pipeline. And best of all, because no frames are being held back, there's hardly any input lag.
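To see why extrapolation sidesteps the latency problem, here's a deliberately naive sketch of the idea in Python. This is not Intel's GFFE algorithm (which uses a far more sophisticated, learned predictor); the frame values, the two-frame history, and the linear "keep changing at the current rate" assumption are all illustrative. The point is structural: every predicted frame is built only from frames that have already been shown, so nothing is ever queued up waiting.

```python
from collections import deque

# Naive frame-extrapolation sketch (NOT Intel's GFFE algorithm).
# Frames are modelled as flat lists of pixel brightness values in [0, 1].

def extrapolate(prev, last):
    """Predict the next frame from the two most recent real ones.

    Linear assumption (illustrative only): each pixel keeps changing
    at its current rate, so next ~= last + (last - prev).
    """
    return [min(max(2 * b - a, 0.0), 1.0) for a, b in zip(prev, last)]

def present_loop(rendered_frames):
    """Yield frames for display: each real frame, then a predicted one.

    Unlike interpolation, no rendered frame is held back: the newest
    real frame goes straight to the screen, and the generated frame
    is derived purely from history, adding no input latency.
    """
    history = deque(maxlen=2)
    for frame in rendered_frames:
        history.append(frame)
        yield frame  # real frame is displayed immediately
        if len(history) == 2:
            yield extrapolate(history[0], history[1])  # extra frame

# Example: a 2x2 frame steadily brightening each render
frames = [[0.1] * 4, [0.2] * 4, [0.3] * 4]
shown = list(present_loop(frames))  # 3 real frames -> 5 displayed
```

Interpolation, by contrast, would have to wait for the *next* real frame before it could show anything in between, which is exactly the queued frame that costs you input lag.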
There will always be some input lag with frame generation, interpolated or extrapolated, because the AI-created frames will never reflect exact input changes, just estimated ones. So those frames will always feel a little bit 'wrong' [[link]] but, as mentioned before, they exist so fleetingly that you're unlikely to really notice.
Frame extrapolation is the natural evolution for DLSS, FSR, and XeSS to take, and this work by Intel and the University of California shows that we're probably not far off seeing it in the wild. With all three GPU companies on the verge of releasing new chips (Intel has already announced Battlemage), I suspect they will be joined or rapidly followed by versions of DLSS and FSR that use AI to extrapolate motion and generate new frames.
We all want next-gen GPUs to have more shaders, cache, and bandwidth for games, but we're probably nearing a bit of a plateau in that respect. Graphics cards of the near future will be leveraging neural networks ever more to upscale and generate frames, to improve performance. If you can't tell that they're being used, though, then I guess it doesn't matter how those pixels are being made.