
Genie 3: Google DeepMind's New AI Turns Prompts into Living, Breathing 3D Worlds
Revealed in August 2025, Genie 3 takes a basic text or image prompt and instantly generates a playable 3D world, complete with objects you can move, weather that shifts with commands, and environments that remember what you've done, even when you walk away. We're talking 720p visuals, 24 FPS performance, and persistent memory over several minutes of continuous, glitch-free interaction.
Unlike Genie 2, which was impressive but limited to short, grainy video loops, Genie 3 is built for immersion. It supports real-time editing on the fly: just type 'spawn a storm' or 'build a cave,' and it happens instantly, with no reload required. This level of interactivity is powered by what DeepMind calls an 'autoregressive world model,' which isn't hardcoded with rules. Instead, Genie 3 learns how the world works, gravity, water, and shadows included, just by watching video data. That means the system doesn't fake physics; it internalizes them, leading to emergent, realistic behaviour without manual programming.
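For intuition, an autoregressive world model generates each new frame conditioned on everything that came before plus the latest user action. Genie 3's actual architecture is not public, so the sketch below is only a hypothetical toy: `predict_next_frame` stands in for a learned neural network, and the "frames" are just numbers.

```python
def predict_next_frame(history, action):
    # Hypothetical stand-in for a learned model: in a real system this
    # would be a neural network; here we derive a deterministic toy
    # "frame" value from the whole history plus the latest action.
    return (sum(history) + len(action)) % 1000

def rollout(actions, start_frame=7):
    # Autoregressive loop: each predicted frame is appended to the
    # history and so conditions every later prediction.
    history = [start_frame]
    for action in actions:
        history.append(predict_next_frame(history, action))
    return history
```

Because every frame is conditioned on the full history, an edit such as 'spawn a storm' simply enters the conditioning context and shapes all subsequent frames; nothing needs to be reloaded.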
What really elevates Genie 3 is its spatiotemporal consistency. If you paint a wall or drop a sword somewhere, leave the scene, and return, the AI remembers the state exactly as you left it. That's a massive step toward AI that understands continuity, something even big game engines struggle with. DeepMind isn't pitching this as a toy; they see Genie 3 as a training ground for general-purpose intelligence. These hyper-realistic, memory-rich environments are where future AI agents can learn safely, without risking real-world consequences.
Despite its potential, Genie 3 isn't open to the public yet. It's currently in limited research preview, accessible only to a select group of developers and researchers while DeepMind fine-tunes its safety and governance protocols.
Still, the implications are crystal clear. Genie 3 is no longer just about creative play; it's a foundational step toward artificial general intelligence (AGI), offering a simulated world where machines can learn, adapt, and possibly outpace human intuition. Simply put, Genie 3 doesn't just build worlds; it builds the infrastructure for AI to truly live in them.
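The persistence behaviour described above can be caricatured in a few lines. This is purely illustrative and not DeepMind's mechanism: Genie 3 maintains consistency inside a learned model, whereas this toy simply stores explicit per-location state.

```python
class PersistentWorld:
    """Toy sketch of spatiotemporal consistency: edits made at a
    location survive after the viewer leaves and returns."""

    def __init__(self):
        self._state = {}  # location -> set of facts about that location

    def apply_edit(self, location, fact):
        # Record a change the user made at a location.
        self._state.setdefault(location, set()).add(fact)

    def observe(self, location):
        # Returning to a location yields exactly the state that was left.
        return self._state.get(location, set())
```

A session then behaves as the article describes: paint a wall, walk away, make edits elsewhere, and `observe` on returning still reports the painted wall.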

Related Articles


The Hindu
2 days ago
How artificial intelligence is tackling mathematical problem-solving
The International Mathematical Olympiad (IMO) is arguably the leading mathematical problem-solving competition. Every year, high school students from around the world attempt six problems over two days, with four and a half hours allotted each day. Students whose scores cross a threshold, roughly corresponding to solving five of the six problems, obtain Gold medals, with Silver and Bronze medals for those crossing lower thresholds. The problems do not require advanced mathematical knowledge, but instead test for mathematical creativity. They are always new, and it is ensured that no similar problems are online or in the literature.
The AI gold medallists
IMO 2025 had some unusual participants. Even before the Olympiad closed, OpenAI, the maker of ChatGPT, announced that an experimental reasoning model of theirs had answered the Olympiad at the Gold medal level, following the same time limits as the human participants. Remarkably, this was not a model specifically trained or designed for the IMO, but a general-purpose reasoning model with reasoning powers good enough for an IMO Gold.
The OpenAI announcement raised some issues. Many felt that announcing an AI result while the IMO had not concluded overshadowed the achievements of the human participants. Also, the Gold medal score was graded and awarded by former IMO medallists hired by OpenAI, and some disputed whether the grading was correct. However, a couple of days later, another announcement came. Google DeepMind attempted the IMO officially, with an advanced version of Gemini Deep Think. Three days after the Olympiad, with the permission of the IMO organisers, they announced that they had obtained a score at the level of a Gold medal. The IMO president Prof. Gregor Dolinar stated, 'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score. Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow.'
Stages of development
Even as it became a popular sensation, ChatGPT was infamous both for hallucinations (making up facts) and for simple arithmetic mistakes. Both would make solving even modest mathematical problems mostly impossible. The first advance that greatly reduced these errors, arriving a few months after the launch of ChatGPT, was the use of so-called agents. Specifically, models became able to use web searches to gather accurate information, and Python interpreters to run programs that perform calculations and check reasoning through numerical experiments. These made the models dramatically more accurate, and good enough to solve moderately hard mathematical problems. However, as a single error in a mathematical solution makes the solution invalid, they were not yet accurate enough to reach IMO (or research) level.
Greater accuracy can be obtained by pairing language models with formal proof systems such as the Lean prover, computer software that can understand and check proofs. Indeed, for IMO 2024 such a system from Google DeepMind called AlphaProof obtained a Silver medal score (but it ran for two days). Finally, a breakthrough came with the so-called reasoning models, such as o3 from OpenAI and Google DeepMind's Gemini 2.5 Pro. These models are perhaps better described as internal-monologue models. Before answering a complex question, they generate a monologue considering approaches, carrying them out, revisiting their proposed solutions, sometimes dithering and starting all over again, before finally giving a solution with which they are satisfied. It was such models, with some additional advances, that earned Olympiad Gold medal scores. Analogical reasoning and combining ingredients from different sources give language models some originality, but probably not enough for hard and novel problems.
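To make 'understand and check proofs' concrete, here is a small example of the kind of statement the Lean prover mechanically verifies: the kernel checks the proof term symbol by symbol, so output that passes the checker cannot be a hallucinated argument. (An illustrative Lean 4 snippet, unrelated to any IMO problem.)

```lean
-- Lean 4: a machine-checked proof that addition of naturals commutes.
-- `Nat.add_comm` is a lemma from Lean's standard library; the kernel
-- verifies that its type matches the stated theorem exactly.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```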
However, verification, either through the internal consistency of reasoning models or, better still, checking by the Lean prover, allows training by trying a large number of things and seeing what works, in the same way that AI systems became chess champions starting with just the rules. Such reinforcement learning has allowed recent models to go beyond their training data by creating their own synthetic data.
The implications
Olympiad problems, for both humans and AIs, are not ends in themselves but tests of mathematical problem-solving ability. There are other aspects of research besides problem-solving, and growing anecdotal experience suggests that AI systems have excellent capabilities in many of these too, such as suggesting approaches and related problems. However, the crucial difference between problem-solving and research or development is scale. Research involves working for months or years without errors creeping in, and without wandering off in fruitless directions. As mentioned earlier, coupling models with the Lean prover can prevent errors. Indications are that it is only a matter of time before this is successful. In the meantime, these models can act as powerful collaborators with human researchers, greatly accelerating research and development in all areas involving mathematics. The era of the super-scientist is here.
Siddhartha Gadgil is a professor in the Department of Mathematics, IISc.
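The 'try many things and keep what the verifier accepts' idea can be sketched in a few lines. In a real system the verifier is a proof checker such as Lean and the candidates come from a language model; everything below is a hypothetical toy with the same control flow.

```python
import random

def verifier(candidate, target):
    # Stand-in for a formal proof checker: it accepts only exact
    # correctness, never "close enough".
    return candidate == target

def search_with_verifier(target, attempts=10_000, seed=0):
    # Generate many candidate "solutions" and return the first one the
    # verifier accepts. Accepted candidates are what reinforcement
    # learning would then train on as verified synthetic data.
    rng = random.Random(seed)
    for _ in range(attempts):
        guess = rng.randrange(100)
        if verifier(guess, target):
            return guess
    return None  # no verified solution found within the budget
```

The key property is that the verifier, not the generator, decides what counts as success, which is exactly why a single reasoning error cannot slip through.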


Time of India
5 days ago
Genie 3: Google DeepMind's New AI Turns Prompts into Living, Breathing 3D Worlds
Revealed in August 2025, Genie 3 takes a basic text or image prompt and instantly generates a playable 3D world, complete with objects that you can move, weather that shifts with commands, and environments that remember what you've done, even when you walk away. We're talking 720p visuals, 24 FPS performance, and persistent memory over several minutes of continuous, glitch-free interaction.
Unlike Genie 2, which was impressive but limited to short, grainy video loops, Genie 3 is built for immersion. It supports real-time editing on the fly: just type 'spawn a storm' or 'build a cave,' and it happens instantly, with no reload required. This level of interactivity is powered by what DeepMind calls an 'autoregressive world model,' which isn't hardcoded with rules. Instead, Genie 3 learns how the world works, gravity, water, and shadows included, just by watching video data. That means the system doesn't fake physics; it internalizes them, leading to emergent, realistic behaviour without manual programming.
What really elevates Genie 3 is its spatiotemporal consistency. If you paint a wall or drop a sword somewhere, leave the scene, and return, the AI remembers the state exactly as you left it. That's a massive step toward AI that understands continuity, something even big game engines struggle with. DeepMind isn't pitching this as a toy; they see Genie 3 as a training ground for general-purpose intelligence. These hyper-realistic, memory-rich environments are where future AI agents can learn safely, without risking real-world consequences.
Despite its potential, Genie 3 isn't open to the public yet. It's currently in limited research preview, accessible only to a select group of developers and researchers while DeepMind fine-tunes its safety and governance protocols.
Still, the implications are crystal clear. Genie 3 is no longer just about creative play; it's a foundational step toward artificial general intelligence (AGI), offering a simulated world where machines can learn, adapt, and possibly outpace human intuition.
Simply put, Genie 3 doesn't just build worlds; it builds the infrastructure for AI to truly live in them.


Time of India
5 days ago
- Time of India
No nine-figure deals, just more breathing room, 'startup vibe': Why engineers are leaving Google's DeepMind for Microsoft
Over the last several months, Microsoft has quietly hired at least 24 engineers and researchers from Google's DeepMind. Many of these professionals are joining the AI team led by Mustafa Suleyman, one of DeepMind's original co-founders.
The recruitment push isn't just about pay. Microsoft is selling a different kind of pitch: freedom, speed, and autonomy. According to The Wall Street Journal, Suleyman has been personally involved in calling up former colleagues and DeepMind engineers to offer something they say Google no longer provides: a leaner, more agile work environment.
Suleyman's team, now operating mostly out of Mountain View, California, has been described as a self-contained hub of innovation. It also has a London presence. Unlike Microsoft's usual headquarters in Redmond, this team functions with considerable independence.
'Startup vibe' over hierarchy
At the heart of Suleyman's offer is the idea of a 'startup vibe': fewer layers, smaller teams, faster decision-making. That approach has clearly struck a chord. In a LinkedIn post last month, Amar Subramanya, who worked at Google for 16 years and was VP of Engineering on the Gemini AI project, said, 'The culture here is refreshingly low ego yet bursting with ambition. It reminds me of the best parts of a startup, fast-moving, collaborative, and deeply focused on building truly innovative, state-of-the-art foundation models to drive delightful AI-powered products such as Microsoft Copilot.' Subramanya has now joined Microsoft AI as a Corporate Vice President.
From DeepMind to Copilot
The new recruits are working on Microsoft's Copilot, a consumer-facing AI assistant positioned against ChatGPT and Google's Gemini. While enterprise versions of Copilot are embedded within Microsoft 365 and GitHub, Suleyman's unit is focused on building a version tailored for everyday users. According to the WSJ report, Microsoft recently rolled out updates to Copilot's integration with its Edge browser, letting users compare hotel options or summarise content from open tabs.
Key hires include:
Adam Sadovsky, a former senior director at DeepMind with 18 years at Google, now a Corporate VP at Microsoft AI
Sonal Gupta, an engineering lead at DeepMind until June, now listed as technical staff at Microsoft
Jonas Rothfuss, who spent a year as an AI research scientist at DeepMind before joining Suleyman's team in May
How Microsoft's offer stacks up
Microsoft's offers, according to sources cited by the WSJ, have been "heftier" than what DeepMind typically pays, especially for senior staff. But they don't come close to the jaw-dropping nine-figure deals being offered by Meta. The parent company of Facebook is aggressively poaching talent from across the AI industry and has offered some researchers up to $300 million to jump ship. Sam Altman, CEO of OpenAI, recently claimed Meta was offering signing bonuses of $100 million to lure his employees.
Why DeepMind employees are leaving
DeepMind now has around 6,000 employees and has become an integral part of Google's AI strategy. But with that scale has come bureaucracy, according to some former staff. Laszlo Bock, a former senior HR executive at Google, told WSJ, 'It feels much more like a company run by a finance person than an engineer.' He added that Google now resembles a 20-year-old version of Microsoft: profitable, but slow-moving and bureaucratic. That shift in perception seems to be fuelling some of the departures.
Google Responds
Google has responded by downplaying the exits. It told WSJ, 'We are excited that we are able to attract the world's leading AI talent, including researchers and engineers who come from rival labs.' It also said its attrition rates remain below industry averages. And in parallel, Google has been making high-profile hires of its own, including the CEO and team behind Windsurf, an AI coding startup it acquired in a $2.4 billion deal.
The AI talent race is getting personal
Suleyman's direct role in recruitment adds a personal touch to Microsoft's AI strategy. After leaving DeepMind, he founded a startup called Inflection, and later joined Microsoft in 2023. Several of his former colleagues from Inflection have followed him into his new team. The talent war isn't just about competing products. It's about who can build the right kind of environment to unlock breakthroughs in artificial intelligence. In a way, this is a reversal of roles from the early 2000s. Back then, Google was the young, hungry company that lured engineers from Microsoft with promises of speed and impact. Today, Microsoft is playing that exact role.
The current wave of hiring also comes at an unusual time. Earlier this month, Microsoft announced 9,000 job cuts, about 4 percent of its global workforce. Yet the company continues to spend heavily on its AI unit. A Microsoft spokesperson told WSJ, 'All of our senior leaders have equal ability to recruit talent and manage their teams in a way that works successfully for their business and their people alike.' And that's the strategy Suleyman seems to be running with: bring in people who want to move fast, give them the space to build, and keep the politics out.
Meta, OpenAI, and the new AI arms race
While Microsoft woos engineers with culture, Meta continues to dominate the cash-led arms race. Alongside big bonuses, Meta recently formed a new Artificial Superintelligence (ASI) lab led by Nat Friedman, former GitHub CEO, and Alexandr Wang, co-founder of Scale AI. The lab is backed by a $14.3 billion investment. Meta has also hired from Apple, OpenAI, Anthropic, and DeepMind. Engineers, it seems, are now being treated like top athletes, with companies vying to outbid each other for their services.
The hiring spree signals a clear shift in how big tech is approaching the AI race. Talent is the new battleground, but culture is the new currency. And as the industry moves into the next phase of AI development, where and how people choose to work might matter as much as the models they build.