Intel Nova Lake CPUs reportedly get a GPU overhaul — Xe3 Celestial and Xe4 Druid IPs used for graphics, media, and display
Intel's upcoming Nova Lake chips are expected to advance the company's modular design philosophy by combining future Xe3 and Xe4 IPs to handle different engines on the chip. Jaykihn, a prolific Intel leaker, claims that Nova Lake-S will use Celestial for its graphics engine, while Druid will handle media and display functions, likely on a separate SoC tile.
The disaggregated chiplet design, introduced to consumers with Meteor Lake, gives Intel the flexibility to manufacture less critical chip elements on mature, cheaper fabrication nodes. Meteor Lake split the media and display capabilities from core graphics: the media and display units sat on a separate System-on-Chip (SoC) tile manufactured on TSMC's N6 process, while the graphics engine resided on its own tile produced with TSMC's N5 technology.
A similar strategy carried over to Lunar Lake and Arrow Lake; Nova Lake, however, reportedly advances Intel's chiplet approach by using separate, specialized IPs for these blocks. Jaykihn claims that the integrated GPU (iGPU) on Nova Lake-S (S for desktop) will be powered by Xe3 (Celestial), while the media and display engines move to the more advanced Xe4 (Druid).
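For illustration, the reported tile-to-IP assignments can be summarized in a small lookup table. The Meteor Lake entries follow the disaggregation described above; the Nova Lake entries reflect Jaykihn's unconfirmed claims, and the data structure itself is just a sketch, not anything Intel publishes:

```python
# Toy mapping of GPU-related engine blocks to IPs and tiles per generation.
# Meteor Lake entries follow Intel's published disaggregation; Nova Lake
# entries are leaked, unconfirmed claims (via Jaykihn) with unknown nodes.
GPU_BLOCKS = {
    "Meteor Lake": {
        "graphics": {"ip": "Xe-LPG", "tile": "graphics tile", "node": "TSMC N5"},
        "media":    {"ip": "Xe-LPG", "tile": "SoC tile",      "node": "TSMC N6"},
        "display":  {"ip": "Xe-LPG", "tile": "SoC tile",      "node": "TSMC N6"},
    },
    "Nova Lake-S (rumored)": {
        "graphics": {"ip": "Xe3 (Celestial)", "tile": "graphics tile", "node": "unknown"},
        "media":    {"ip": "Xe4 (Druid)",     "tile": "SoC tile",      "node": "unknown"},
        "display":  {"ip": "Xe4 (Druid)",     "tile": "SoC tile",      "node": "unknown"},
    },
}

def ip_for(generation: str, block: str) -> str:
    """Return the IP reportedly handling a given engine on a given chip."""
    return GPU_BLOCKS[generation][block]["ip"]

print(ip_for("Nova Lake-S (rumored)", "media"))  # Xe4 (Druid)
```

The point of the split is visible in the table: graphics sits on its own tile (and leading-edge node), while media and display ride on the SoC tile, so each block can move to a new IP or process independently.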
This means that while the integrated GPU should still impress, the most notable improvements could come in hardware codec support, owing to the shift to a next-generation media architecture. Meanwhile, Bionic_Squash reports that the graphics engine will use a slightly modified version of Xe3, for better or worse. If it is an improvement over the vanilla Xe3 we'll see in Panther Lake, the step might be comparable to the evolution from Meteor Lake's Xe-LPG to Arrow Lake's Xe-LPG+ (mobile-only), the latter of which introduced XMX engines.
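On Linux, one way to see which codec profiles an Intel iGPU's media engine actually exposes is `vainfo` (from libva-utils), which lists VA-API profile/entry-point pairs. The sketch below parses vainfo-style output to group entry points by profile; the sample text is invented for illustration, not captured from real hardware:

```python
import re

# Lines in the format vainfo prints (profile : entrypoint).
# This sample is invented for illustration only.
SAMPLE_VAINFO = """\
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileAV1Profile0            : VAEntrypointVLD
"""

def parse_vainfo(text: str) -> dict[str, list[str]]:
    """Group VA-API entry points (decode/encode) by codec profile."""
    profiles: dict[str, list[str]] = {}
    for line in text.splitlines():
        m = re.match(r"\s*(VAProfile\w+)\s*:\s*(VAEntrypoint\w+)", line)
        if m:
            profiles.setdefault(m.group(1), []).append(m.group(2))
    return profiles

caps = parse_vainfo(SAMPLE_VAINFO)
# VAEntrypointVLD indicates hardware decode is exposed for that profile.
print("AV1 decode:", "VAEntrypointVLD" in caps.get("VAProfileAV1Profile0", []))
```

If Druid does bring new codec support, it would show up as additional profile lines in output like this once drivers land.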
Beyond Nova Lake, this is a positive sign for Intel's future graphics products. We could see the first Druid-powered engines as early as 2026, while Celestial, which is already undergoing pre-validation, is slated to power Panther Lake CPUs next year. Nova Lake might thus act as a test vehicle for Druid, a precursor to a future full-fledged product built on Druid graphics.