
Genie 3: Google DeepMind's New AI Turns Prompts into Living, Breathing 3D Worlds
Revealed in August 2025, Genie 3 takes a basic text or image prompt and instantly generates a playable 3D world, complete with objects you can move, weather that shifts on command, and environments that remember what you've done even after you walk away. We're talking 720p visuals, 24 FPS performance, and persistent memory over several minutes of continuous, glitch-free exploration.

Unlike Genie 2, which was impressive but limited to short, grainy video loops, Genie 3 is built for immersion. It supports real-time editing on the fly: just type 'spawn a storm' or 'build a cave,' and it happens instantly, no reload required. This level of interactivity is powered by what DeepMind calls an 'autoregressive world model,' which isn't hardcoded with rules. Instead, Genie 3 learns how the world works (gravity, water, shadows) just by watching video data. That means the system doesn't fake physics; it internalizes them, leading to emergent, realistic behaviour without manual programming.

What really elevates Genie 3 is its spatiotemporal consistency. If you paint a wall or drop a sword somewhere, leave the scene, and return, the AI remembers the state exactly as you left it. That's a massive step toward AI that understands continuity, something even big game engines struggle with. DeepMind isn't pitching this as a toy; they see Genie 3 as a training ground for general-purpose intelligence. These hyper-realistic, memory-rich environments are where future AI agents can learn safely, without risking real-world consequences.

Despite its potential, Genie 3 isn't open to the public yet. It's currently in a limited research preview, accessible only to a select group of developers and researchers while DeepMind fine-tunes its safety and governance protocols. Still, the implications are crystal clear. Genie 3 is no longer just about creative play; it's a foundational step toward artificial general intelligence (AGI), offering a simulated world where machines can learn, adapt, and possibly outpace human intuition. Simply put, Genie 3 doesn't just build worlds; it builds the infrastructure for AI to truly live in them.
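To make the 'autoregressive world model' idea concrete: DeepMind has not published Genie 3's architecture, so the following is a purely illustrative Python sketch with made-up state, not the real system. The core loop is predicting each next frame from the history of frames plus the latest user action, which is why a prompt like 'spawn a storm' is just another conditioning input and why past changes persist.

```python
import copy
from collections import deque

def toy_predictor(history, action):
    """Stand-in for a learned neural frame predictor.

    A real world model would map (past frames, action) to pixels;
    here a "frame" is just a dict of scene state updated by
    hand-written rules, purely for illustration.
    """
    frame = copy.deepcopy(history[-1])
    if action == "spawn storm":
        frame["weather"] = "storm"
    elif action == "drop sword":
        frame["objects"].append("sword")
    return frame

def rollout(initial_frame, actions):
    # The full history conditions every prediction; carrying it
    # forward is what gives the world its persistence, so a dropped
    # sword is still there when you come back.
    history = deque([initial_frame])
    for action in actions:
        history.append(toy_predictor(history, action))
    return history

frames = rollout({"weather": "clear", "objects": []},
                 ["drop sword", "spawn storm"])
print(frames[-1])  # {'weather': 'storm', 'objects': ['sword']}
```

The deep copy is the point of the sketch: each frame is derived from, but independent of, the one before it, so earlier states stay intact in memory rather than being overwritten.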

Related Articles


Mint
12 hours ago
Setback for Elon Musk? Co-founder Igor Babuschkin exits xAI to launch new AI safety venture
Igor Babuschkin, co-founder and former engineering lead at Elon Musk's AI startup xAI, has left the company to start a new venture focused on AI safety and research. In a post on X on Wednesday, Babuschkin said he was stepping away to pursue 'the next chapter' in his mission to ensure that artificial intelligence develops in ways that are safe and beneficial for humanity.

Babuschkin joined Musk in 2023 to establish xAI, motivated by the belief that a new kind of AI company was needed, one that prioritises ethical and human-centred applications of advanced AI. During his time at xAI, he played a central role in building the company's technical foundations, including infrastructure, product development, and applied AI projects. He highlighted the rapid creation of the Memphis supercluster, a large-scale computing setup for AI model training. Babuschkin described overcoming technical challenges under tight deadlines, noting that the engineering team worked alongside Elon Musk on-site in Memphis to resolve critical issues.

Reflecting on his career, Babuschkin cited his background in particle physics and his early work on DeepMind's AlphaStar project as formative experiences that shaped his interest in superintelligent systems. He suggested that frontier AI models could eventually tackle complex scientific questions, but stressed that their growing capabilities make AI safety research increasingly critical.

Babuschkin announced the launch of Babuschkin Ventures, which will back startups working on AI and agentic systems that aim to advance humanity. The venture will also support research into AI safety. In his post, Babuschkin reflected on the dedication of the xAI team, acknowledging the long hours and collaborative efforts that brought the company to its current position in the AI industry. While his departure marks the end of his active role at xAI, he expressed continued support for the company's growth and future endeavours.
According to a TechCrunch report, Babuschkin's exit follows a turbulent period for xAI, marked by controversies surrounding its AI chatbot, Grok. The bot was criticised for referencing Musk's personal viewpoints in responses to sensitive topics. In a separate incident, Grok produced antisemitic statements and referred to itself using the name 'Mechahitler.'


The Hindu
4 days ago
How artificial intelligence is tackling mathematical problem-solving
The International Mathematical Olympiad (IMO) is arguably the leading mathematical problem-solving competition. Every year, high school students from around the world attempt six problems over two days, with four and a half hours each day. Students whose scores cross a threshold, roughly corresponding to solving five of the six problems, obtain Gold medals, with Silver and Bronze medals for those crossing lower thresholds. The problems do not require advanced mathematical knowledge, but instead test for mathematical creativity. They are always new, and it is ensured that no similar problems are online or in the literature.

The AI gold medallist

IMO 2025 had some unusual participants. Even before the Olympiad closed, OpenAI, the maker of ChatGPT, announced that an experimental reasoning model of theirs had answered the Olympiad at Gold medal level, following the same time limits as the human participants. Remarkably, this was not a model specifically trained or designed for the IMO, but a general-purpose reasoning model with reasoning powers good enough for an IMO Gold.

The OpenAI announcement raised some issues. Many felt that announcing an AI result while the IMO had not concluded overshadowed the achievements of the human participants. Also, the Gold medal score was graded and awarded by former IMO medallists hired by OpenAI, and some disputed whether the grading was correct. However, a couple of days later, another announcement came. Google DeepMind attempted the IMO officially, with an advanced version of Gemini Deep Think. Three days after the Olympiad, with the permission of the IMO organisers, they announced that they had obtained a score at the level of a Gold medal. The IMO president Prof. Gregor Dolinar stated, 'We can confirm that Google DeepMind has reached the much-desired milestone, earning 35 out of a possible 42 points — a gold medal score. Their solutions were astonishing in many respects. IMO graders found them to be clear, precise and most of them easy to follow.'
Stages of development

Even as it became a popular sensation, ChatGPT was infamous both for hallucinations (making up facts) and for simple arithmetic mistakes. Both of these made solving even modest mathematical problems mostly impossible. The first advance that greatly reduced these errors, which came a few months after the launch of ChatGPT, was the use of so-called agents. Specifically, models were now able to use web searches to gather accurate information, and Python interpreters to run programs to perform calculations and check reasoning using numerical experiments. These made the models dramatically more accurate, and good enough to solve moderately hard mathematical problems. However, as a single error in a mathematical solution makes the solution invalid, they were not yet accurate enough to reach IMO (or research) level.

Greater accuracy can be obtained by pairing language models with formal proof systems such as the Lean prover, software that can understand and check proofs. Indeed, at IMO 2024 such a system from Google DeepMind, called AlphaProof, obtained a Silver medal score (though it ran for two days). Finally, a breakthrough came with the so-called reasoning models, such as o3 from OpenAI and Google DeepMind's Gemini 2.5 Pro. These models are perhaps better described as internal monologue models. Before answering a complex question, they generate a monologue considering approaches, carrying them out, revisiting their proposed solutions, sometimes dithering and starting all over again, before finally giving a solution with which they are satisfied. It was such models, with some additional advances, that achieved Olympiad Gold medal scores. Analogical reasoning and combining ingredients from different sources give language models some originality, but probably not enough for hard and novel problems. However, verification, either through the internal consistency of reasoning models or, better still, checking by the Lean prover, allows training by trying a large number of things and seeing what works, in the same way that AI systems became chess champions starting with just the rules. Such reinforcement learning has allowed recent models to go beyond their training data by creating their own synthetic data.

The implications

Olympiad problems, for both humans and AIs, are not ends in themselves but tests of mathematical problem-solving ability. There are other aspects of research besides problem-solving. Growing anecdotal experience suggests that AI systems have excellent capabilities in many of these too, such as suggesting approaches and related problems. However, the crucial difference between problem-solving and research and development is scale. Research involves working for months or years without errors creeping in, and without wandering off in fruitless directions. As mentioned earlier, coupling models with the Lean prover can prevent errors. Indications are that it is only a matter of time before this is successful. In the meantime, these models can act as powerful collaborators with human researchers, greatly accelerating research and development in all areas involving mathematics. The era of the super-scientist is here.

Siddhartha Gadgil is a professor in the Department of Mathematics, IISc

