
Latest news with #GoogleforDevelopers

High school maths trumps Olympiad gold medalist AI models: Google DeepMind CEO answers why

Economic Times

a day ago

  • Business
  • Economic Times

High school maths trumps Olympiad gold medalist AI models: Google DeepMind CEO answers why

Google DeepMind chief executive Demis Hassabis said that advanced AI models like Gemini can surpass benchmarks like the International Mathematical Olympiad (IMO) yet struggle with basic high school maths problems because of inconsistencies. "The lack of consistency in AI is a major barrier to achieving artificial general intelligence (AGI)," he said on the "Google for Developers" podcast, calling it a major roadblock on that journey. Artificial general intelligence, or AGI, is generally understood as software that has the general cognitive abilities of human beings and can perform any task that a human can. He also referred to Google CEO Sundar Pichai's description of the current state of AI as "AJI", or artificial jagged intelligence, where systems excel at certain tasks but fail at others.

Road towards AGI

The DeepMind CEO said that simply increasing data and computing power won't suffice to solve the problem, and highlighted that rigorous testing and challenging benchmarks are needed to measure an AI model's progress precisely. "We need better testing and new, more challenging benchmarks to determine precisely what the models excel at and what they don't."

Not just Google

ET reported that artificial intelligence (AI) agents, hailed as the "next big thing" by major tech players like Google, OpenAI, and Anthropic, are expected to be a major focus and trend this year. OpenAI launched Operator, its first AI agent, in January this year for Pro users across multiple regions, including Australia, Brazil, Canada, India, Japan, Singapore, South Korea, the UK, and most places where ChatGPT is available. In October, Anthropic launched an upgraded version of its Claude 3.5 Sonnet model, which can interact with any desktop application. This AI agent can perform desktop-level commands and browse the web to complete tasks.

Google DeepMind CEO: AGI Still Years Away as AI Struggles with Simple Mistakes

Hans India

a day ago

  • Business
  • Hans India

Google DeepMind CEO: AGI Still Years Away as AI Struggles with Simple Mistakes

Big Tech giants like Google, Meta, and OpenAI are locked in a high-stakes race to develop artificial general intelligence (AGI) — AI systems capable of thinking, planning, and adapting on par with humans. But according to Google DeepMind CEO Demis Hassabis, that goal remains distant, as current AI still makes surprisingly simple errors despite impressive achievements.

Speaking on the Google for Developers podcast, Hassabis described today's AI as having 'jagged intelligence' — excelling in certain domains but stumbling in basic ones. He cited Google's latest Gemini model, enhanced with DeepThink reasoning technology, which has reached gold-medal-level performance in the International Mathematical Olympiad — one of the toughest math competitions worldwide. Yet, that same model can still make avoidable mistakes in high school-level math or fail at simple games. 'It shouldn't be that easy for the average person to just find a trivial flaw in the system,' Hassabis remarked. This inconsistency, he explained, is a sign that AI is far from human-level intelligence.

He argued that simply scaling up models with more data and computing power will not bridge the gap to AGI. Instead, fundamental capabilities like reasoning, planning, and memory — areas still underdeveloped in even the most advanced AI — must be strengthened.

Another challenge, Hassabis noted, is the lack of rigorous testing. Many standard AI benchmarks are already saturated, creating the illusion of near-perfect performance while masking weaknesses. For example, Gemini models recently scored 99.2% on the AIME mathematics benchmark, leaving minimal room for measurable improvement. However, these results don't necessarily mean the model is flawless.

To overcome this, Hassabis called for 'new, harder benchmarks' that go beyond academic problem-solving to include intuitive physics, real-world reasoning, and 'physical intelligence' — the ability to understand and interact with the physical world as humans do. He also stressed the need for robust safety benchmarks capable of detecting risks such as deceptive behavior in AI systems. 'We're in need of new, harder benchmarks, but also broader ones, in my opinion — understanding world physics and intuitive physics and other things that we take for granted as humans,' he said.

While Hassabis has previously suggested AGI might arrive within five to ten years, he now emphasizes caution. He believes AI companies should first focus on perfecting existing models before chasing full AGI. The path ahead, he implied, is less about winning a race and more about ensuring AI's capabilities are reliable, safe, and truly intelligent across the board. For now, despite breakthroughs in reasoning and problem-solving, the dream of AI that matches human intelligence remains a work in progress — and one that may take longer than the industry's most optimistic predictions.

AGI when? Google DeepMind CEO says AI still makes simple mistakes despite big wins in elite math

India Today

a day ago

  • Business
  • India Today

AGI when? Google DeepMind CEO says AI still makes simple mistakes despite big wins in elite math

Big Tech giants like Meta, OpenAI, and Google are racing to build artificial general intelligence (AGI): AI systems capable of thinking, planning, and adapting like humans. These companies are pouring billions into research and aggressively recruiting, even poaching, top talent to assemble the best teams. But Google DeepMind CEO Demis Hassabis believes true AGI is still years away, as the AI industry remains far from perfecting current models, let alone achieving human-level intelligence.

Speaking on the latest episode of the Google for Developers podcast, Hassabis said even the most advanced AI models today display 'jagged intelligence', meaning that while they excel in some areas, they still stumble on basic tasks. He cited examples from Google's Gemini models, highlighting that Google's latest and most powerful Gemini AI model, which incorporates the company's DeepThink reasoning technique, can achieve gold-medal-level performance at the International Mathematical Olympiad, one of the toughest competitions in the world. Yet, he noted, those same models can still make avoidable errors in high school-level mathematics or fail at simple games. 'It shouldn't be that easy for the average person to just find a trivial flaw in the system,' Hassabis said.

Hassabis argued that bridging the gap to AGI will require more than simply scaling up models with additional data and computing power. In his view, companies will need to focus on fundamental capabilities, particularly reasoning, planning, and memory, which remain underdeveloped in current AI models.

Another missing piece, he added, is the lack of robust testing. While many standard benchmarks are already saturated, giving the impression of near-perfect performance, he suggested they often fail to expose weaknesses. For example, Hassabis noted that Gemini models recently scored 99.2 per cent on the AIME mathematics benchmark, leaving little room for measurable improvement, even though the model still has flaws.

To address this, Hassabis said companies need 'new, harder benchmarks' — not only in academic problem-solving, but also in areas such as intuitive physics, real-world reasoning, and 'physical intelligence'. He also emphasised the importance of safety benchmarks to detect traits such as deception. 'We're in need of new, harder benchmarks, but also broader ones, in my opinion — understanding world physics and intuitive physics and other things that we take for granted as humans,' he said.

Hassabis has previously predicted that AGI could arrive within five to ten years, but cautioned that current systems, from Gemini to OpenAI's latest GPT-5, still lack critical capabilities. He stressed that the focus of AI companies should first be on perfecting today's AI models before pursuing full AGI.

Google's DeepMind CEO exposes the shocking flaw holding AI back from full AGI - here are the details

Economic Times

2 days ago

  • Business
  • Economic Times

Google's DeepMind CEO exposes the shocking flaw holding AI back from full AGI - here are the details

Even though AI can solve world-class math problems, it still struggles with high school-level equations. That's the paradox that Google DeepMind CEO Demis Hassabis claims is preventing AI from becoming fully AGI. The culprit is apparently an unexpected lack of consistency.

Hassabis believes that the most significant barrier to full AGI is AI's inconsistency in reasoning and problem-solving. Fixing the flaw will necessitate advances in reasoning, planning, and memory, not just more data or computing power. He said that "some missing capabilities in reasoning and planning in memory" need to be fixed. Speaking in a Tuesday episode of the "Google for Developers" podcast, Hassabis said that even sophisticated models such as Google's Gemini still make mistakes on problems that most schoolchildren could figure out, as quoted in a report by Business Insider. "It shouldn't be that easy for the average person to just find a trivial flaw in the system," he stated.

He cited Gemini models that have been improved with DeepThink, a method that improves reasoning, and that have the potential to take home gold at the International Mathematical Olympiad, the most prominent math competition in the world. The same systems, he claimed, can "still make simple mistakes in high school maths," and he referred to them as "uneven intelligences" or "jagged intelligences," as quoted in a report by Business Insider. "Some dimensions, they're really good; other dimensions, their weaknesses can be exposed quite easily," he stated.

Getting to Artificial General Intelligence (AGI), the point at which AI can think and reason like a person in all areas, is not just about getting more data and computing power. Hassabis thinks that the missing piece is to make reasoning, planning, and memory better, as quoted in a report by Business Insider. AI's weaknesses can make its strengths less useful if they aren't consistent. For example, an AI that can solve graduate-level physics problems but not basic algebra is not really intelligent in the same way that people are. Google CEO Sundar Pichai came up with the term "AJI", Artificial Jagged Intelligence, to describe the current state of the technology because it isn't balanced, as quoted in a report by Business Insider. Hassabis says the industry needs "new, harder benchmarks" to thoroughly test AI's strengths and weaknesses and make sure it works well on all kinds of tasks.

Hassabis is still hopeful, though, and thinks that AGI could be here in five to ten years. He does, however, stress that Big Tech still hasn't figured it out. Before the launch of GPT-5, OpenAI CEO Sam Altman said something similar. He said that GPT-5 is a big step forward, but it is not yet true AGI. Altman said that one major gap is that AI can't keep learning on its own from new information it comes across in real time. Both leaders agree that the next big steps forward in AI won't just come from bigger models. They will also come from smarter models that have the kind of balanced, adaptable intelligence that people take for granted.

What is preventing AI from achieving AGI? AI lacks consistency, excelling at complex tasks while failing at simpler ones.

How would Demis Hassabis describe current AI? He refers to it as "jagged intelligence" because it is strong in some areas but weak in others.

