XeSS SDK 2.1 release opens up Intel's framegen tech to compatible AMD and Nvidia GPUs — Xe Low Latency also goes cross-platform if framegen is enabled
Intel has officially released the XeSS 2.1.0 SDK, a major update to its Xe Super Sampling (XeSS) framework that brings broader GPU support and unlocks frame generation for AMD and Nvidia cards. The update enables XeSS Frame Generation on any GPU that supports Shader Model 6.4 or higher, including GeForce GTX 10-series and newer cards and AMD's Radeon RX 5000 series onward. However, Intel recommends a GeForce RTX 30-series or Radeon RX 6000-series GPU or newer for the best experience.
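Those support tiers boil down to a simple gate: a hard Shader Model 6.4 floor, plus a recommended hardware class above it. Here is a minimal Python sketch of that logic — the function name and family table are illustrative only, not part of the actual XeSS SDK, which queries the driver and shader model at runtime:

```python
# Hypothetical sketch of the cross-vendor XeSS-FG support tiers described
# above. Names and the family table are illustrative, not real SDK API.
MIN_SHADER_MODEL = (6, 4)  # Shader Model 6.4 is the hard floor

RECOMMENDED_FAMILIES = {"GeForce RTX 30", "GeForce RTX 40",
                        "Radeon RX 6000", "Radeon RX 7000"}

def xess_fg_tier(shader_model, family):
    """Classify a GPU as 'unsupported', 'minimum', or 'recommended'."""
    if shader_model < MIN_SHADER_MODEL:
        return "unsupported"   # pre-SM-6.4 hardware is out entirely
    if family in RECOMMENDED_FAMILIES:
        return "recommended"   # Intel's suggested class for the best experience
    return "minimum"           # supported, but below the recommended class
```

Tuple comparison makes the shader-model check read naturally: `(6, 3) < (6, 4)` is true, so an SM 6.3 part lands in the unsupported bucket.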
Previously, only Intel Arc users had access to the full XeSS 2 feature set, which includes super resolution (XeSS-SR), frame generation (XeSS-FG), and low-latency mode (XeLL). With version 2.1.0, all three features can now be implemented on non-Intel GPUs. However, on non-Intel cards, low-latency rendering only kicks in while frame generation is active, so users and developers will need to enable both features for the benefit to apply.
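That coupling rule is easy to get wrong in a settings menu, so it's worth spelling out. The Python sketch below models which features actually take effect given a vendor and three toggles — the function and its names are hypothetical stand-ins for a game's settings layer, not actual XeSS SDK calls:

```python
# Illustrative model of the XeSS 2.1 feature-coupling rule described above:
# on non-Intel GPUs, XeLL only takes effect when XeSS-FG is also enabled.
# These names are hypothetical, not the SDK's actual API.
def active_features(vendor, sr, fg, xell):
    """Return the set of XeSS 2 features that actually engage."""
    active = set()
    if sr:
        active.add("XeSS-SR")
    if fg:
        active.add("XeSS-FG")
    # On Intel Arc, XeLL works standalone; elsewhere it requires XeSS-FG.
    if xell and (vendor == "intel" or fg):
        active.add("XeLL")
    return active
```

The practical upshot: a GeForce or Radeon user who switches on low latency but leaves frame generation off gets no XeLL benefit at all.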
Although this opens the door to wider adoption, support for the XeSS 2.1 SDK isn't automatic. Even if a game already supports XeSS 2, developers will still need to integrate the new SDK version to enable compatibility for Nvidia and AMD GPUs. For engines that don't yet support XeSS at all, developers must link the updated SDK and adjust internal configuration to target other vendors.
Under the hood, XeSS 2.1 uses DP4a instructions to run its convolutional neural networks on non-Intel GPUs—offering a fallback path where Intel's XMX cores aren't available. DP4a is exposed in HLSL via the packed dot-product intrinsics introduced in Shader Model 6.4, which explains the SDK's minimum requirement. On Intel Arc Alchemist and Battlemage GPUs, XMX still delivers better efficiency and performance for frame generation due to dedicated matrix acceleration. In contrast, Nvidia and AMD cards use a compute shader-based version that may deliver slightly lower image quality at a higher cost in GPU resources.
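DP4a itself is a packed dot-product operation: four signed 8-bit multiplies accumulated into a 32-bit integer in a single instruction (HLSL's `dot4add_i8packed`), which is what makes running quantized neural-network layers feasible on general-purpose shader cores. A small Python model of its semantics, for illustration only:

```python
def pack_i8x4(vals):
    """Pack four signed 8-bit ints into one 32-bit word, lane 0 in the low byte."""
    out = 0
    for i, v in enumerate(vals):
        out |= (v & 0xFF) << (8 * i)
    return out

def dp4a(a, b, acc):
    """Model of DP4a / HLSL dot4add_i8packed: signed 8-bit packed dot
    product with 32-bit accumulate."""
    total = acc
    for i in range(4):
        # Extract byte i from each operand and sign-extend from 8 bits.
        xa = (a >> (8 * i)) & 0xFF
        xb = (b >> (8 * i)) & 0xFF
        if xa >= 0x80:
            xa -= 0x100
        if xb >= 0x80:
            xb -= 0x100
        total += xa * xb
    return total
```

For example, `dp4a(pack_i8x4([1, 2, 3, 4]), pack_i8x4([5, 6, 7, 8]), 0)` computes 1·5 + 2·6 + 3·7 + 4·8 = 70 in what would be one instruction on hardware, versus the four multiply-accumulates a naive shader would issue.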
Despite the SDK's capability expansion, just 22 titles support XeSS 2 right now, according to Intel's own tracking, and to our knowledge, that number hasn't recently increased. Between the developer effort required to update games to the latest SDK and the slow uptake of XeSS 2 in general, it's possible that the real-world availability of XeSS 2.1 and its cross-platform benefits will be highly limited.
Still, the fact that integrating the XeSS 2 SDK now enables frame generation and low-latency features across all GPU vendors could make it more appealing to developers who have to balance the time and resource investment of integration against the expected user benefit.
AMD's cross-platform support for upscaling and frame generation in its FSR 3.x suite of technologies has won those features wide adoption, so we can only hope that developers see the same appeal in XeSS 2.1. As cutting-edge upscaling tech becomes increasingly vendor-locked (see FSR 4 and DLSS), Intel's cross-platform support for both upscaling and framegen could become an increasingly rare approach.
Follow Tom's Hardware on Google News to get our up-to-date news, analysis, and reviews in your feeds. Make sure to click the Follow button.
