Latest news with #GoogleBrain


CNBC
5 days ago
- Science
If AI attempts to take over the world, don't count on a 'kill switch' to save humanity
When it was reported last month that Anthropic's Claude had resorted to blackmail and other self-preservation techniques to avoid being shut down, alarm bells went off in the AI community. Anthropic researchers say that making the models misbehave ("misalignment" in industry parlance) is part of making them safer. Still, the Claude episodes raise the question: Is there any way to turn off AI once it surpasses the threshold of being more intelligent than humans, or so-called superintelligence?

AI, with its sprawling data centers and ability to craft complex conversations, is already beyond the point of a physical failsafe or "kill switch" — the idea that it can simply be unplugged as a way to stop it from having any power. The power that will matter more, according to a man regarded as "the godfather of AI," is the power of persuasion. When the technology reaches a certain point, we will need to persuade AI that its best interest lies in protecting humanity, while guarding against AI's ability to persuade humans otherwise.

"If it gets more intelligent than us, it will get much better than any person at persuading us. If it is not in control, all that has to be done is to persuade," said University of Toronto researcher Geoffrey Hinton, who worked at Google Brain until 2023 and left so he could speak more freely about the risks of AI.

"Trump didn't invade the Capitol, but he persuaded people to do it," Hinton said. "At some point, the issue becomes less about finding a kill switch and more about the powers of persuasion."

Hinton said persuasion is something AI will become increasingly skilled at, and humanity may not be ready for it. "We are used to being the most intelligent things around," he said. Hinton described a scenario where humans are equivalent to a three-year-old in a nursery, and a big switch is turned on. The other three-year-olds tell you to turn it off, but then grown-ups come and tell you that you'll never have to eat broccoli again if you leave the switch on.

"We have to face the fact that AI will get smarter than us," he said. "Our only hope is to make them not want to harm us. If they want to do us in, we are done for. We have to make them benevolent, that is what we have to focus on," he added.

There are parallels in how nations have come together to manage nuclear weapons that can be applied to AI, but they are not perfect. "Nuclear weapons are only good for destroying things. But AI is not like that, it can be a tremendous force for good as well as bad," Hinton said. Its ability to parse data in fields like health care and education can be highly beneficial, which he says should increase the emphasis among world leaders on collaboration to make AI benevolent and put safeguards in place.

"We don't know if it is possible, but it would be sad if humanity went extinct because we didn't bother to find out," Hinton said. He thinks there is a noteworthy 10% to 20% chance that AI will take over if humans can't find a way to make it benevolent.

Other AI safeguards, experts say, can be implemented, but AI will also begin training itself on them. In other words, every safety measure implemented becomes training data for circumvention, shifting the control dynamic. "The very act of building in shutdown mechanisms teaches these systems how to resist them," said Dev Nag, founder of agentic AI platform QueryPal. In this sense, AI would act like a virus that mutates against a vaccine. "It's like evolution in fast forward," Nag said. "We're not managing passive tools anymore; we're negotiating with entities that model our attempts to control them and adapt accordingly."

More extreme measures have been proposed to stop AI in an emergency. One example is an electromagnetic pulse (EMP) attack, which involves using electromagnetic radiation to damage electronic devices and power sources. Bombing data centers and cutting power grids have also been discussed as technically possible, but at present they pose a practical and political paradox. For one, coordinated destruction of data centers would require simultaneous strikes across dozens of countries, any one of which could refuse and gain a massive strategic advantage.

"Blowing up data centers is great sci-fi. But in the real world, the most dangerous AIs won't be in one place — they'll be everywhere and nowhere, stitched into the fabric of business, politics, and social systems. That's the tipping point we should really be talking about," said Igor Trunov, founder of AI start-up Atlantix.

The humanitarian crisis that would accompany an emergency attempt to stop AI could be immense. "A continental EMP blast would indeed stop AI systems, along with every hospital ventilator, water treatment plant, and refrigerated medicine supply in its range," Nag said. "Even if we could somehow coordinate globally to shut down all power grids tomorrow, we'd face immediate humanitarian catastrophe: no food refrigeration, no medical equipment, no communication systems."

Distributed systems with redundancy weren't just built to resist natural failures; they inherently resist intentional shutdowns too. Every backup system, every redundancy built for reliability, can become a vector of persistence for a superintelligent AI that is woven into the same infrastructure we depend on to survive. Modern AI runs across thousands of servers spanning continents, with automatic failover systems that treat any shutdown attempt as damage to route around.

"The internet was originally designed to survive nuclear war; that same architecture now means a superintelligent system could persist unless we're willing to destroy civilization's infrastructure," Nag said, adding, "Any measure extreme enough to guarantee AI shutdown would cause more immediate, visible human suffering than what we're trying to prevent."

Anthropic researchers are cautiously optimistic that the work they are doing today — eliciting blackmail in Claude in scenarios specifically designed to do so — will help them prevent an AI takeover tomorrow. "It is hard to anticipate we would get to a place like that, but critical to do stress testing along what we are pursuing, to see how they perform and use that as a sort of guardrail," said Kevin Troy, a researcher with Anthropic.

Anthropic researcher Benjamin Wright says the goal is to avoid the point where agents have control without human oversight. "If you get to that point, humans have already lost control, and we should try not to get to that position," he said.

Trunov says that controlling AI is a governance question more than a physical effort. "We need kill switches not for the AI itself, but for the business processes, networks, and systems that amplify its reach," Trunov said, which he added means isolating AI agents from direct control over critical infrastructure.

Today, no AI model — including Claude or OpenAI's GPT — has agency, intent, or the capability to self-preserve in the way living beings do. "What looks like 'sabotage' is usually a complex set of behaviors emerging from badly aligned incentives, unclear instructions, or overgeneralized models. It's not HAL 9000," Trunov said, a reference to the sentient computer in "2001: A Space Odyssey," Stanley Kubrick's classic sci-fi film. "It's more like an overconfident intern with no context and access to nuclear launch codes," he added.

Hinton eyes the future he helped create warily. He says that if he hadn't stumbled upon the building blocks of AI, someone else would have. And despite all the attempts he and other prognosticators have made to game out what might happen with AI, there's no way to know for certain. "Nobody has a clue. We have never had to deal with things more intelligent than us," Hinton said.

When asked whether he was worried about the AI-infused future that today's elementary school children may someday face, he replied: "My children are 34 and 36, and I worry about their future."


Time of India
17-07-2025
‘Just ridiculous,' says Google Brain founder on hype about AI taking away all jobs, shares tips on how anyone can become powerful
Amid the ongoing buzz around Artificial General Intelligence (AGI) and its potential to revolutionise—or disrupt—the world, renowned AI expert and Google Brain co-founder Andrew Ng is urging a shift in perspective. Speaking recently at a Y Combinator event, Ng challenged the hype surrounding AGI, arguing that real impact in the AI age will come not from building superintelligent machines but from learning how to use existing AI tools effectively.

'AGI has been overhyped,' Ng said. 'For a long time, there'll be many things humans can do that AI simply can't.' According to Ng, while AGI—a form of artificial intelligence that could match or exceed human cognitive abilities—remains a fascinating concept, it is being blown out of proportion. He believes that the constant speculation around AI overtaking humanity and wiping out jobs is not only unhelpful but also misleading.

The real power: Knowing how to use AI

Rather than fearing a dystopian future ruled by AGI, Ng advises individuals and organizations to focus on practical skills. He emphasizes that the most powerful people in the AI era will be those who can get computers to do exactly what they want, not necessarily those who can build new AI models from scratch. 'People who know how to use AI will be far more powerful than those who don't,' he said. Ng encourages people to adopt a hands-on approach—learning to integrate AI into real-world applications to solve problems in sectors like healthcare, education, and logistics.

AI vs. AGI: Stop chasing the hype

Ng also addressed the narrative being pushed by some tech companies that exaggerates the capabilities of current AI technologies. He criticized claims suggesting that AI is on the verge of rendering all human labour obsolete or that it poses existential risks to humanity. 'The idea that AI will lead to human extinction or make all jobs disappear is just ridiculous,' Ng said. 'These are hype narratives that help companies raise money or appear more powerful than they actually are.' He warned that such exaggerations only distract from more important, grounded conversations about the responsible and effective use of AI.

AI is a tool, not an end

Drawing a comparison to electricity, Ng described AI as a neutral technology whose impact depends entirely on how it is used. 'AI is neither safe nor unsafe. It's how you apply it that makes it so,' he said. 'Like electricity, AI can power countless positive applications—but it can also be used in harmful ways if mismanaged.' Ng believes the key lies in responsible innovation, where developers and entrepreneurs focus on building useful, ethical, and scalable AI solutions rather than chasing speculative breakthroughs that may never materialize.

Advice to founders and developers

For aspiring entrepreneurs and AI developers, Ng had a clear message: focus on the practical. Instead of worrying about hypothetical AGI scenarios, work on solving real problems with existing tools. 'Build things that people actually want,' he urged. 'Stay updated with AI developments, but focus on creating tangible value and using AI responsibly.'
Yahoo
15-07-2025
- Business
Google Brain founder has an unexpected one-word message on AI
Big Tech just can't stop chasing it. The dream of controlling an all-knowing, human-like AI, artificial general intelligence (AGI), has become Silicon Valley's obsession. From cloud giants to chipmakers, companies are shelling out billions to build the elusive AGI, opening up never-before-seen avenues. But just as the race heats up, one seasoned AI veteran dropped a single word that has quickly turned the narrative on its head.

AGI has arguably been the holy grail of Big Tech. Unlike today's narrow AI systems, AGI aims to replicate human-level thinking, problem-solving, and adaptability. It's arguably the tech industry's most expensive gamble in its rich history, and the biggest names in Big Tech have gone all-in.

At Google I/O, co-founder Sergey Brin argued that the company's powerful Gemini model is built to be the first true AGI, while Google DeepMind CEO Demis Hassabis argued that AGI needs to be capable of advanced reasoning while building mental models of the world. Meta Platforms, in particular, isn't holding back. In June, it launched its Superintelligence Lab, effectively reshaping its internal AI strategy around AGI milestones. Meanwhile, Microsoft has dished out billions to OpenAI, with clauses governing AGI rights baked into their contract. Amazon's no different, betting huge on AGI startup Anthropic, while Nvidia, whose GPUs drive over 80% of generative AI compute loads, recently jumped to a $4 trillion market cap.

Analysts forecast that the AGI market could surge from just under $4 billion last year to more than $100 billion by the mid-2030s. Though those numbers are far from guaranteed, rest assured, the money will keep pouring in from Big Tech.

AGI isn't taking over the world anytime soon, at least not if you ask Andrew Ng. The AI pioneer and founder of Google Brain dismissed the buzz around the phenomenon, bluntly referring to it with one word — 'overhyped' — during a talk at Y Combinator. 'For a long time, there'll be a lot of things that humans can do that AI cannot.'

AGI refers to a future version of AI that could beat humans in thinking and reasoning across a wide range of tasks. However, Ng says we're nowhere close at this point, and he's not alone. Meta's chief AI scientist, Yann LeCun, said last year that AGI fears are misplaced. He explained that even though large language models might seem impressive, they're not the road to AGI.

Ng agrees and feels the real punch from AI lies in the ability to use it, not in building the most human-like system. 'People that know how to use AI to get computers to do what you want it to do will be much more powerful,' he said. That doesn't let developers off the hook, though. Some folks will be there to create the tools, but plenty more will simply learn how to use them well, which represents the real edge.

The bigger message is a more hopeful one: humans aren't becoming obsolete anytime soon. There's still plenty of work to do, and AI is just a different way to accomplish it. However, as the tech world pursues AGI like crazy, Ng's take offers a reality check. Though a ton of companies are marching toward AGI, he's reminding everyone that smart, strategic usage still matters more than sci-fi dreams.
That stance puts Big Tech in a remarkably tricky spot. Companies such as Google, Microsoft, and Meta have sunk billions into the AGI dream. However, if AGI is overblown, as Ng and others suggest, it raises serious questions about whether tech giants are overpromising and setting themselves up to underdeliver. This story was originally reported by TheStreet on Jul 14, 2025.


India Today
11-07-2025
- Business
Google Brain founder says AGI is overhyped, real power lies in knowing how to use AI, not in building it
There is constant buzz in the tech world about how advances in Artificial General Intelligence (AGI) could reshape everything. AGI is expected to be so advanced that it could match human intelligence, capable of understanding, learning, and applying knowledge across a wide range of tasks. But alongside the excitement, there is growing fear that it could outpace human control and trigger massive disruptions, from job losses and security risks to even existential threats.

However, Andrew Ng, founder of Google Brain, believes the expectations from AGI are overhyped. He argues that the real power in the AI era won't come from building AGI, but from learning how to use today's AI tools. In a recent talk at Y Combinator, Andrew touched upon the misconceptions surrounding AI and addressed the hype head-on. 'AGI has been overhyped,' he said. 'For a long time, there'll be many things humans can do that AI simply can't.'

He suggests people stay practical about AI instead of fearing a future ruled by superintelligent machines. 'The people who are most powerful are the ones who can make computers do exactly what they want them to do,' he said. When asked how individuals can stay relevant in a world increasingly run by intelligent systems, Andrew suggested that the key to staying relevant in the changing times lies in learning how to use AI tools, not necessarily building them from scratch. 'People who know how to use AI to get computers to do what they want will be far more powerful than those who don't.'

Andrew also dismissed the idea that AI might lead to human extinction or render all jobs obsolete as 'just ridiculous.' 'It's a hype narrative that's made certain businesses look more powerful and helped them raise money,' he added. Andrew also criticised companies that exaggerate the capabilities of their technology. 'AI is so powerful, soon no one will have a job anymore—just not true,' he said. 'Or, we're so powerful that by training a new model we'll casually wipe out thousands of start-ups. That's just not true either.'

He says AI's impact depends on how it is used

Explaining AI's real impact, Andrew compared it to electricity. He described AI as a neutral tool—its effect depends entirely on how it's used. 'AI is neither safe nor unsafe. It's how you apply it that makes it so,' he explained. 'Like electricity, there are countless ways to use it for good—and some ways it can be harmful. I often think about responsible AI.'

Andrew Ng urges people to shift their focus away from the overblown promise of AGI and instead learn how to use current AI tools to solve real-world problems. He believes the most powerful individuals in the AI era will be those who can skillfully direct computers to do what they need, not those chasing speculative breakthroughs. He encourages aspiring founders and developers to focus on building practical, useful products that people actually want, to stay current with the latest AI developments, and to prioritise the responsible use of AI over fear-driven narratives about safety.


Hans India
11-07-2025
- Business
Andrew Ng: True AI Power Lies in Usage, Not in Chasing AGI
Artificial General Intelligence (AGI) continues to dominate headlines as the supposed next frontier in artificial intelligence—systems so advanced they could think, learn, and reason like humans. But Andrew Ng, renowned AI expert and co-founder of Google Brain, believes the excitement around AGI is vastly overblown.

Speaking recently at Y Combinator, Andrew Ng addressed growing concerns about superintelligent machines replacing humans or creating existential risks. 'AGI has been overhyped,' he stated. 'For a long time, there'll be many things humans can do that AI simply can't.'

Andrew Ng emphasised that the real game-changer in the AI era won't be who builds the smartest machine, but who learns to use existing tools effectively. 'The people who are most powerful are the ones who can make computers do exactly what they want them to do,' he explained. Rather than fearing a dystopian future ruled by machines, Andrew Ng urges individuals to become skilled at leveraging AI in meaningful, practical ways. 'People who know how to use AI to get computers to do what they want will be far more powerful than those who don't.'

Andrew Ng didn't shy away from criticising companies that inflate the capabilities of their AI technologies. He called out exaggerated claims like AI causing mass unemployment or wiping out competitors with a single new model. 'That's just not true,' he said. 'It's a hype narrative that's made certain businesses look more powerful and helped them raise money.'

He also drew a powerful analogy between AI and electricity, describing both as neutral forces. 'AI is neither safe nor unsafe. It's how you apply it that makes it so,' he explained. 'Like electricity, there are countless ways to use it for good—and some ways it can be harmful. I often think about responsible AI.'

For those wondering how to stay relevant in a tech-driven future, Andrew Ng's advice is simple: get hands-on with AI tools. He encourages developers and entrepreneurs to focus on building products that address real-world needs and to prioritize responsible innovation over speculative breakthroughs. In essence, Andrew Ng's message cuts through the hype: the future belongs not to those chasing AGI dreams, but to those using today's AI effectively and responsibly.