Latest news with #WorldandHumanActionModel
Yahoo
16-04-2025
- Entertainment
After Playing the AI Remake of Quake 2, I'm Not Worried About an AI Takeover
Microsoft recreated Quake 2 using AI but failed to deliver a smooth gameplay experience. AI-generated games lack real structure, making them rigid, unresponsive, and often random. The Quake 2 AI demo from Microsoft feels more like a hazy dream than a game.

Microsoft's big Muse AI project has produced a basic but impressive AI-generated version of the classic shooter Quake 2. The project is supposed to represent an important step forward in using artificial intelligence to make video games, showing how AI could completely change how games are developed. But I've played it, and I can say that all it does is show how far the technology is from the goal Microsoft has set.

The Quake 2 project uses Microsoft's World and Human Action Model (WHAM), an AI system designed to create game visuals on the fly and mimic how players act in real time. WHAM works like other AI models that generate text or images, but instead of words or pictures, it's trained on huge amounts of game data. For this Quake 2 version, the AI was trained on many hours of recorded gameplay from the original game. That footage helped the AI learn how the game works, how levels are built, and what the graphics look like. By studying this data, WHAM can create new frames of Quake 2 gameplay on the fly as the player moves and interacts.

The result is a playable experience where the AI generates every frame in real time. The controls are simple, using the keyboard to move and interact, and the AI builds the game world as the player explores, making levels feel both recognizable and new.

Microsoft's main goal for this project isn't a game it can sell but a showcase of what Muse AI can do. The idea is to prove that AI could someday generate full games without needing as much manual coding and level design. Microsoft sees Muse as a tool that might help game developers in the future by speeding up early testing and content creation.
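To make the generation loop described above concrete, here is a minimal, hypothetical sketch of how a WHAM-style model is driven: at each step the model sees a short window of recent frames plus the player's input and emits the next frame, which is fed back in as context. Everything here (`DummyWorldModel`, `play`) is my own illustration, not Microsoft's actual API; the real model runs a large neural network where the stub below just returns a counter.

```python
from collections import deque

class DummyWorldModel:
    """Stand-in for a trained world model: maps (context, action) -> next frame.

    A real WHAM-style model would run a neural network over encoded frames
    and controller inputs; here each "frame" is just an integer ID.
    """
    def predict_next_frame(self, context, action):
        last = context[-1] if context else 0
        return last + 1

def play(model, actions, context_len=4):
    """Autoregressive loop: each generated frame is fed back as context."""
    context = deque(maxlen=context_len)  # model only ever sees a short window
    frames = []
    for action in actions:
        frame = model.predict_next_frame(list(context), action)
        context.append(frame)  # the model's own output becomes its input
        frames.append(frame)
    return frames

frames = play(DummyWorldModel(), ["forward", "shoot", "turn"])
print(frames)  # one generated frame per player input
```

The key point the sketch captures is that there is no game state anywhere: the only "world" is the rolling window of recently generated frames, which is why errors compound the longer you play.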
The Microsoft AI-made version of Quake 2 was extremely frustrating and confusing to play, nothing like the smooth, fast action of the original game. I wouldn't even call it a game, but a tech demo that runs poorly.

The first big problem was the terrible frame rate: it runs so slowly and choppily that it felt more like flipping through slides in a PowerPoint presentation than playing a fast shooter. Instead of moving smoothly, the character moves in a jerky, glitchy way, making even simple things like walking around feel difficult. It feels like driving a tank with a slow response time for steering and stopping. I wouldn't say it's about "control" so much as guiding the game toward what should happen.

The graphics were just as bad. Everything looked blurry and lacked detail, worse than even much older games. Enemies are hard to make out since they're mostly vague shapes with missing textures and stiff animations; they're more like weird ghosts than actual threats. The levels themselves are also poorly designed, with muddy, unclear textures.

On top of all that, the controls didn't work properly. Basic actions like aiming and shooting were often delayed or didn't register at all. Plenty of times I pulled the trigger and nothing happened, and enemies didn't take damage when they should have. Even stranger, enemies would sometimes disappear mid-fight, only to pop up somewhere else a second later.

The whole experience was deeply confusing and unpleasant. The terrible frame rate, ugly graphics, and broken controls made the game feel unreal and hard to understand. Even simple things like walking down a hallway became frustrating, because the game would change the entire level when I walked into a wall and then turned around.

The Microsoft AI-made Quake 2 demo and the fan-made Minecraft AI project work similarly when it comes to creating game worlds in real time.
Both use AI models trained on huge amounts of gameplay footage to build levels and game elements as you play, much like how AI chatbots are trained on text. Unfortunately, because of this, they also share a big problem: the worlds they generate are often random and don't make much sense, shifting as you play. Neither game keeps things consistent for long; parts of the world pop in and out of existence or change suddenly as the player moves around. Nothing stays the same when the player isn't looking directly at it, which is a major flaw in this kind of system. The game world effectively exists only where the player is currently looking, making everything feel broken and confusing.

Another reason these two projects feel alike is that both use a technique called "next-frame prediction." The AI looks at what the player is doing right now and tries to guess what should happen next, without actually understanding how the game world is supposed to work. This is nothing like a normal game engine, where the whole world is always there, following clear rules. Because of this, both games misbehave in weirdly similar ways: in the Quake 2 demo, levels and enemies change completely or disappear for no reason, just as the Minecraft AI often swaps out chunks of the world for completely different ones when the player turns away.

Microsoft plans to use this technology to help make games, but I'm not sure how it would. With manual work, developers at least control what happens and can demonstrate how the game works. This, instead, is an AI guessing at what the developer wants based on what it has been shown over hundreds of hours of gameplay. While Microsoft's Muse AI take on Quake 2 is an impressive technological step, all it proved is that AI is nowhere near ready to fully, or even partially, replace human developers in making video games.
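The contrast between a game engine's persistent state and a next-frame predictor's limited memory can be sketched in a few lines. This is my own toy illustration, not either project's implementation: the "predictor" only remembers its last few observations, so anything that scrolls out of that window is simply gone, which is exactly why walls and enemies vanish when you look away.

```python
class EngineWorld:
    """Engine-style world: state persists no matter where the player looks."""
    def __init__(self):
        self.enemies = {"room_a": 2}

    def enemies_in(self, room):
        return self.enemies.get(room, 0)

class WindowedPredictor:
    """Predictor-style world: remembers only the last `window` observations."""
    def __init__(self, window=3):
        self.window = window
        self.history = []

    def observe(self, room, enemies):
        # Keep only the most recent `window` (room, enemy-count) observations.
        self.history = (self.history + [(room, enemies)])[-self.window:]

    def enemies_in(self, room):
        for seen_room, n in reversed(self.history):
            if seen_room == room:
                return n
        return None  # fell out of context: the model must "dream up" an answer

engine = EngineWorld()
pred = WindowedPredictor(window=3)
pred.observe("room_a", 2)
for _ in range(3):                      # player looks elsewhere for a few frames
    pred.observe("room_b", 0)
print(engine.enemies_in("room_a"))      # engine state survives
print(pred.enemies_in("room_a"))        # slipped out of the window: unknown
```

When `enemies_in` returns `None`, a real generative model doesn't stop; it hallucinates a plausible answer, which is the "swapping out chunks of the world" behavior both demos exhibit.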
Right now, AI still struggles to keep the world coherent, let alone respond to players in a natural way. These are major obstacles that would need to be overcome before AI could realistically make games with Muse the way Microsoft wants.

One of the biggest weaknesses of current AI is that it lacks real creativity or originality. Models like Muse learn from existing data, in this case footage from past games. They're good at copying styles and patterns they've seen before, but they can't come up with completely new ideas or designs. What they produce is based on what already exists, meaning it's more a clever copy than something fresh and groundbreaking. This dependence on pre-existing material makes it hard for AI to create games that are truly innovative. While AI might be able to tweak familiar game mechanics in different ways, it can't make the kind of big, imaginative jumps that lead to truly unique and memorable gaming experiences.

While AI can simulate basic player behavior, it has trouble anticipating unexpected actions or adapting smoothly to different play styles, which has always been a core limitation of these models. This makes the gameplay feel rigid, predictable, and ultimately less enjoyable. Without the ability to respond in a nuanced way, AI-generated games feel like a punishment to play through; I had a headache after the first 10-minute session.

There's also the issue of how much energy these large AI models use. Running them requires a huge amount of power, which raises concerns about how expensive they would be to operate. The high energy costs may not be worth the benefits Microsoft hopes to get out of this. The company clearly wants Muse to become a game-development tool, but it doesn't feel anywhere near ready for that. The future of game development probably won't involve an AI takeover at this rate. The demo is so rough that I'm amazed Microsoft wanted it released at all.
If you feel like AI will take over human jobs, play the Quake 2 AI version, and you'll feel a lot better.


The Guardian
10-03-2025
- Entertainment
Are AI-generated video games really on the horizon?
Another month, another revolutionary generative AI development that will apparently fundamentally alter how an entire industry operates. This time, tech giant Microsoft has created a 'gameplay ideation' tool, Muse, which it calls the world's first Wham, or World and Human Action Model. Microsoft claims that Muse will speed up the lengthy and expensive process of game development by letting designers play around with AI-generated gameplay videos to see what works.

Muse is trained on gameplay data from UK studio Ninja Theory's game Bleeding Edge. It has absorbed tens of thousands of hours of people's real gameplay, both footage and controller inputs, and it can now generate accurate-looking mock gameplay clips for that game, which can be edited and adapted with prompts.

All well and good, but in an announcement video for Muse, Microsoft Gaming CEO Phil Spencer caused confusion when he said that it could be invaluable for the preservation of classic games: AI models, he implied, could 'learn' those games and emulate them on modern hardware. It's not clear how this would be possible. Further muddying the waters, Microsoft's overall CEO Satya Nadella then implied in a podcast interview that Muse was the first step in creating a 'catalogue' of AI-generated games. But Muse, as it stands, can't create a game; it can only create made-up footage of a game.

So just what is this new gaming AI tool? A swish addition to game developers' tool belts? Or the first step towards an era of AI-generated gaming detritus? The idea is that designers (or indeed players) can try out ideas with Muse without spending hours (or days) in a game engine implementing something that might not feel good or even work. If a designer wants to see what, say, a power-up would look like in-game, they could generate mock video showing just that, with the AI filling in the gaps.
'Game engines are complicated, messy things and it takes a lot of time to simulate things – they're not built for that,' comments Julian Togelius, associate professor in computer science and engineering at the New York University Tandon School of Engineering. '[Working with] a simulation of the game can be much easier and faster. The opportunities opened up by this kind of study are pretty big, but the limitations are also real.'

AI gameplay simulations aren't totally new: Google's GameNGen project created a playable version of Doom that ran without a game engine in 2024. But the problem has always been consistency. Google's Doom simulation worked well at first, but the longer you played, the more the AI would 'dream up' game elements that weren't accurate. This is what Microsoft claims to have solved with Muse, but it comes with a massive caveat. 'This particular model is trained on 500,000 game sessions, so likely around 100,000 hours of gameplay. But it only works because you have so much data. If you move far beyond what's been recorded, simulations generally stop behaving well,' explains Togelius.

Microsoft has stated that it is already using Muse to develop real-time playable AI models trained on its other first-party games. But while Muse suits live-service games such as Bleeding Edge, with access to thousands of hours of live gameplay, for smaller games and single-player titles it would be a monumental and probably pointless effort to train a generative AI model on each and every specific game.

'It's an amazing technical hurdle that they've jumped, but it kind of feels like they're going through their Zoom moment: a product coming into a market that doesn't really have a purpose,' says Ken Noland, the veteran game designer and self-described AI realist who runs AI Guys, an AI-focused co-development company.
'The technology is cool, and don't get me wrong, video generation is not an easy thing to do … I just don't see its target audience. Game developers won't be able to use it for rapid production because it doesn't actually, aside from visualising a particular thing, address any underlying game development issues.'

Ultimately, there appears to be a disconnect between Spencer and Nadella's comments and what Muse actually does at the moment. Unless something significant changes, it doesn't appear capable of creating playable simulations of classic games, and it certainly doesn't create entirely 'new' AI-generated games. It isn't even clear how Muse's generated videos could be translated into actual gameplay.

AI-generated video games may yet be on the horizon. Google quietly released Genie 2 a few months back, which is capable of generating 'playable worlds', but that's not what Muse does, at least for now. 'I will choose to graciously interpret what Satya said as visions of what could be done in the future,' says Togelius. 'It's entirely possible that we will get to some version of that, but it's not around the corner. What Microsoft has done in this paper is a foundation stone.'