Xbox Next could introduce a secret weapon to bring it back to the fight
Quick Summary
Microsoft is reportedly working on its next console, although recent rumours of a 2026 release have been dismissed by a couple of industry experts.
It is tipped to be a PC in a "TV-friendly shell" too, which would give it a significant advantage when it comes to games development.
There have been countless rumours about Microsoft's plans for future consoles of late, with some suggesting it'll ditch the home machine and concentrate on a handheld to rival the Nintendo Switch 2 and Steam Deck. Some even believe we'll never see another console from the software giant, with the Xbox brand instead focusing solely on game releases and the cloud.
However, industry expert Jez Corden, of Windows Central, has put forward another theory – that the next-gen Xbox (lovingly called Xbox Next, for now) could actually be a PC in a living-room-friendly shape.
Speaking on The XB2 podcast, he and host Rand al Thor 19 discussed Xbox's possible future plans and explained that while next year is too soon a target, there's every likelihood the next Xbox could arrive the following year as part of a strategy rethink.
"I think the next Xbox is coming out in 2027 and devkits will go out next year," speculated Rand.
This was in response to a "leak" posted last week that claimed next year's Call of Duty will launch on a new Xbox console, and that it was being developed using full devkits. However, Corden disputed that claim.
"There is no Xbox devkit right now," countered Corden. "That developers have already got the next-gen Xbox devkit is just not accurate.
"[But] assuming that the next Xbox is Windows-based, you could spec out a kind-of devkit that targets those specs. The whole idea of the next Xbox is that it's going to be a PC in essence, but with a TV-friendly shell that also has a specific set of specs in mind.
"So developers will be building for a PC in a way, so will know what the specs will be and optimise for it."
Whoever is right, it seems insiders and experts believe there will definitely be a follow-up to the Xbox Series X, even though Microsoft has been trounced in the console wars for the last two generations. And that, if an Xbox handheld is on the way, it'll be joined by a "TV-friendly" machine.
Making it more of a PC in essence could also ensure third parties develop for it in significant numbers – after all, they'll likely already be building games for the platform by default anyway.
We might just have to wait a bit longer for it.