Minecraft ended virtual reality support today
Minecraft is no longer (officially) available on virtual and mixed reality platforms. The change was confirmed in today's patch notes for the game's Bedrock Edition, following an announcement from developer Mojang in October. That fall announcement indicated the platforms would be dropped in March, so players who favored VR wound up getting a few extra weeks to fully immerse themselves in their blocky worlds.
Removing entire platforms isn't a choice game developers make lightly, especially for a game like Minecraft, whose player base still numbers in the hundreds of millions. It seems unlikely Mojang would drop virtual and mixed reality support unless it expected the disruption to its many fans to be minor. There are still plenty of critically acclaimed games that make VR ownership worthwhile (Beat Saber, anyone?), but a title as major as Minecraft abandoning the hardware isn't a great look.

Related Articles
Yahoo
a day ago
Amazon wants to become a global marketplace for AI
Amazon Web Services isn't betting on one large language model (LLM) winning the artificial intelligence race. Instead, it's offering customers a buffet of models to choose from. AWS, the cloud computing arm of Amazon (AMZN), aims to become the go-to infrastructure layer for the AI economy, regardless of which model wins out. By making customer choice a defining principle, AWS hopes to win out against rivals that have aligned closely with specific LLM providers, notably Microsoft (MSFT), which partnered with ChatGPT creator OpenAI.

'We don't think that there's going to be one model to rule them all,' Dave Brown, vice president of compute and networking at AWS, told Yahoo Finance.

The model-neutral approach is embedded in Amazon Bedrock, a service that allows AWS customers to build their own applications using a wide range of models, with more than 100 to choose from. Brown added that after Chinese startup DeepSeek surprised the world, AWS had a fully managed version of the disruptive model available on Bedrock within a week.

Two years after its launch, Bedrock is now the fastest-growing service offered by AWS, which accounted for over 18% of Amazon's total revenue in the first quarter. It's why Amazon CEO Andy Jassy sees Bedrock as a core part of the company's AI growth strategy. But to understand the competitive advantage AWS hopes to offer with Bedrock, you have to go back to its origin story.

Bedrock dates back to a six-page internal memo that Atul Deo, AWS's director of product management, wrote in 2020. Before OpenAI's ChatGPT launched in 2022 and made 'generative AI' a household term, Deo pitched a service that could generate code from plain-English prompts using large language models. But Jassy, the head of AWS at the time, didn't buy it. 'His initial reaction was, 'This seems almost like a pipe dream,'' Deo said. He added that while a tool that makes coding easy sounds obvious now, the technology was 'still not quite there.'
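The model-agnostic pitch shows up concretely at the API level: Bedrock keeps the request shape uniform across providers, so switching models is largely a matter of changing the model identifier. A minimal sketch of that idea, assuming a Converse-style request shape (the model IDs and helper below are illustrative, not an authoritative rendering of the Bedrock API):

```python
# Illustrative sketch: a uniform request shape lets the same application
# code target different model providers. The model IDs are examples;
# consult the Bedrock model catalog for identifiers in your region.

def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build a Converse-style request; only the model ID varies per provider."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

# The same helper serves an Anthropic model and an Amazon model alike:
claude_req = build_converse_request("anthropic.claude-3-5-sonnet-20240620-v1:0", "Hello")
nova_req = build_converse_request("amazon.nova-lite-v1:0", "Hello")

# Only the model identifier differs between the two requests.
assert claude_req["messages"] == nova_req["messages"]
assert claude_req["modelId"] != nova_req["modelId"]
```

In practice a payload like this would be handed to a Bedrock runtime client; the point of the design is that swapping providers requires no change to the surrounding application code.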
When that project, initially known as CodeWhisperer, launched in 2023, the team realized it could offer the service for a broader set of use cases, giving customers a choice of different models with 'generic capabilities' that 'could be used as a foundation to build a lot of interesting applications,' according to Deo. Deo noted that the team steered away from doubling down on its own model after it recognized a pattern of customers wanting choice in other AWS services. This led to AWS becoming the first provider to offer a range of different models to customers. With this foundational approach in mind, Amazon renamed the project Bedrock.

To be sure, the model-agnostic approach has risks, and many analysts don't consider Amazon to be leading the AI race, even though it has ramped up its AI spending. If there is ultimately one model to rule them all, similar to how Google came to dominate search, Amazon risks falling further behind. At the beginning of the year, Amazon and its peers Meta (META), Microsoft, and Google parent Alphabet (GOOG) expected to spend $325 billion combined, mostly on AI infrastructure.

To keep pace, Amazon has hedged its bets with its own technology and one LLM provider in particular: Anthropic. In November 2024, AWS doubled its investment in Anthropic to $8 billion in a deal that requires Anthropic to train its large language model, Claude, using only AWS's chips. (For comparison, Microsoft has invested over $13 billion in OpenAI.) The $8 billion deal allows Amazon to prove out its AI training infrastructure and deepen ties with one LLM provider while continuing to offer customers a wide selection of models on Bedrock.

'I mean, this is cloud selling 101, right?' said Dan Rosenthal, head of go-to-market partnerships at Anthropic. 'There are some cases where it's been very clear that a customer wants to use a different model on Bedrock for something that we just frankly don't focus on, and that's great.
We want to win where we have a right to win.'

Amazon also launched its own family of foundation models, called Nova, at the end of 2024, two years after the launch of ChatGPT. But competition and expectations remain high: Revenue at AWS increased 16.9% to $29.27 billion in Q1, marking the third quarter in a row it missed analyst estimates despite double-digit growth.

The Anthropic partnership also underscores a bigger competition AWS may be fighting with chipmakers, including Nvidia (NVDA), which recently staged a $1 trillion rally in just two months after an earnings print that eased investor concerns about chip export controls. While Amazon is an Nvidia customer, it also produces its own AI chips, which are effective and more affordable relative to the power they consume (a measure known as 'price performance'). On Bedrock, AWS lets clients choose whether to use its own CPUs and GPUs or chips from competitors like Intel (INTC), AMD (AMD), and Nvidia.

'We're able to work with the model providers to really optimize the model for the hardware that it runs,' Brown said. 'There's no change the customer has to make.'

Customers not only have a choice of model but also a choice of which infrastructure the model should run and train on. This helps AWS compete on price, a key battleground with Nvidia, which offers the most expensive chips on the market. This 'coopetition' dynamic could position Amazon to take market share from Nvidia if it can prove its own chips can do the job for a lower sticker price.

It's a bet that Amazon is willing to spend on, with capital expenditures expected to hit $100 billion in 2025, up from $83 billion last year. While AWS doesn't break out its costs for AI, CEO Andy Jassy said on an earnings call in February that the 'vast majority of that capex spend is on AI for AWS.' In an April letter to shareholders, Jassy noted that 'AI revenue is growing at triple-digit YoY percentages and represents a multibillion-dollar annual revenue run rate.'
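The 'price performance' argument above can be made concrete with a back-of-the-envelope comparison: divide the work a chip delivers by what it costs to run. The figures below are purely illustrative, not real benchmarks of any AWS or Nvidia hardware:

```python
# Illustrative price-performance comparison: work delivered per dollar
# of hourly running cost. All numbers here are hypothetical.

def price_performance(tokens_per_sec: float, hourly_cost_usd: float) -> float:
    """Tokens processed per dollar, assuming one hour of sustained use."""
    return tokens_per_sec * 3600 / hourly_cost_usd

# A hypothetical premium GPU versus a slower but cheaper in-house chip.
premium = price_performance(tokens_per_sec=1000, hourly_cost_usd=40.0)
in_house = price_performance(tokens_per_sec=700, hourly_cost_usd=15.0)

# The slower chip can still win on price performance:
assert in_house > premium
```

This is the lever the article describes: a chip doesn't need to be the fastest to take share, only to deliver more work per dollar than the premium alternative.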

CNBC
2 days ago
Sam Altman brings his eye-scanning identity verification startup to the UK
LONDON — World, the biometric identity verification project co-founded by OpenAI CEO Sam Altman, is set to launch in the U.K. this week. The venture, which uses a spherical device called the Orb to scan people's eyes, will become available in London from Thursday and plans to roll out to several other major U.K. cities, including Manchester, Birmingham, Cardiff, Belfast, and Glasgow, in the coming months.

The project aims to authenticate the identity of humans with its Orb device and prevent the fraudulent abuse of artificial intelligence systems, such as deepfakes. It works by scanning a person's face and iris, then creating a unique code to verify that the individual is a human and not an AI. Once someone has created their iris code, they are gifted some of World's WLD cryptocurrency and can use an anonymous identifier called World ID to sign in to various applications. It currently works with the likes of Minecraft, Reddit, and Discord.

Adrian Ludwig, chief architect of Tools for Humanity, a core contributor to World, told CNBC on a call that the project is seeing significant demand from both enterprise users and governments as the threat of AI being used to defraud various services, from banking to online gaming, grows. "The idea is no longer just something that's theoretical. It's something that's real and affecting them every single day," he said, adding that World is now transitioning "from science project to a real network."

The venture recently opened up shop in the U.S. with six flagship retail locations in Austin, Atlanta, Los Angeles, Nashville, Miami, and San Francisco. Ludwig said that, looking ahead, the plan is to "increase the number of people who can be verified by an order of magnitude over the next few months."

Ever since its initial launch as "Worldcoin" in 2021, Altman's World has been plagued by concerns over how it could affect users' privacy.
The startup says it addresses these concerns by encrypting the biometric data it collects and ensuring the original data is deleted. On top of that, World's verification system depends on a decentralized network of users' smartphones, rather than the cloud, to carry out individual identity checks. Still, that approach becomes harder to pull off in a network with billions of users, like Facebook or TikTok. For now, World has 13 million verified users and is planning to scale that up. Ludwig argues World is a scalable network because all of the computation and storage is processed locally on a user's device; only the infrastructure for confirming someone's uniqueness is handled by third-party providers.

Ludwig says the way technology is evolving means it's getting much easier for new AI systems to bypass currently available authentication methods, such as facial recognition and CAPTCHA bot-prevention measures. He sees World serving a pertinent need in the transition from physical to digital identity systems.

Governments are exploring digital ID schemes to move away from physical cards, but so far these attempts have been far from perfect. One example of a major digital identity system is India's Aadhaar. Although the initiative has seen widespread adoption, it has also drawn criticism for lax security and for allegedly worsening social inequality in India.

"We're beginning to see governments now more interested in how can we use this as a mechanism to improve our identity infrastructure," Ludwig told CNBC. "Mechanisms to identify and reduce fraud is of interest to governments."

The technologist added that World has been talking to various regulators about its identity verification solution, including the Information Commissioner's Office, which oversees data protection in the U.K.

"We've been having lots of conversations with regulators," Ludwig told CNBC. "In general, there's been lots of questions: how do we make sure this works?
How do we protect privacy? If we engage with this, does it expose us to risks?" "All of those questions we've been able to answer," he added. "It's been a while since we've had a question asked we didn't have an answer to."