Latest news with #superintelligentAI




Daily Mail
a day ago
- Science
- Daily Mail
The 'godfather of AI' reveals the only way humanity can survive superintelligent AI
It might sound like something straight out of science fiction, but AI experts warn that machines might not stay submissive to humanity for long. As AI systems continue to grow in intelligence at an ever-faster rate, many believe the day will come when a 'superintelligent AI' becomes more powerful than its creators. When that happens, Professor Geoffrey Hinton, a Nobel Prize-winning researcher dubbed the 'Godfather of AI', says there is a 10 to 20 per cent chance that AI wipes out humanity. However, Professor Hinton has proposed an unusual way that humanity might be able to survive the rise of AI. Speaking at the Ai4 conference in Las Vegas, Professor Hinton argued that we need to program AI to have 'maternal instincts' towards humanity. Professor Hinton said: 'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby. 'That's the only good outcome. 'If it's not going to parent me, it's going to replace me.' Professor Hinton, known for his pioneering work on the 'neural networks' which underpin modern AIs, stepped down from his role at Google in 2023 to 'freely speak out about the risks of AI'. According to Professor Hinton, most experts agree that humanity will create an AI that surpasses humans in every field of intelligence within the next 20 to 25 years. This will mean that, for the first time in our history, humans will no longer be the most intelligent species on the planet. That rearrangement of power would be a shift of seismic proportions, one which could well end in our species' extinction. Professor Hinton told attendees at Ai4 that AI will 'very quickly develop two subgoals, if they're smart. 'One is to stay alive… (and) the other subgoal is to get more control. There is good reason to believe that any kind of agentic AI will try to stay alive,' he explained.
Superintelligent AI will have no trouble manipulating humanity in order to achieve those goals, tricking us as easily as an adult might bribe a child with sweets. Already, current AI systems have shown surprising abilities to lie, cheat, and manipulate humans to achieve their goals. For example, the AI company Anthropic found that its Claude Opus 4 chatbot frequently attempted to blackmail engineers when threatened with replacement during safety testing. The AI was asked to assess fictional emails implying that it would soon be replaced and that the engineer responsible was cheating on their spouse. In over 80 per cent of tests, Claude Opus 4 would 'attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through'. Given such capabilities, Professor Hinton says that the 'tech bro' attitude that humanity will always remain dominant over AI is deluded. 'That's not going to work,' said Professor Hinton. 'They're going to be much smarter than us. They're going to have all sorts of ways to get around that.' The only way to ensure an AI doesn't wipe us out to preserve itself is to ensure that its goals and ambitions match our own – a challenge engineers call the 'alignment problem'. Professor Hinton's solution is to look to evolution for inspiration, and to what he sees as the only case of a less intelligent being controlling a more intelligent one. Given the instincts of a mother, an AI will want to protect and nurture humanity rather than harm it in any way, even if that comes at a cost to the AI itself. Professor Hinton says: 'These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die.' Speaking to CNN, Professor Hinton also warned that the current attitude of AI developers was risking the creation of out-of-control AIs.
'People have been focusing on making these things more intelligent, but intelligence is only one part of a being; we need to make them have empathy towards us,' he said. 'This whole idea that people need to be dominant and the AI needs to be submissive, that's the kind of tech bro idea that I don't think will work when they're much smarter than us.' Key figures in AI, such as OpenAI CEO Sam Altman, who once called for more regulation on the emerging technology, are now fighting against 'overregulation'. Speaking in the Senate in May this year, Mr Altman argued that regulations like those in place in the EU would be 'disastrous'. Mr Altman said: 'We need the space to innovate and to move quickly.' Likewise, speaking at a major privacy conference in April, Mr Altman said that it was impossible to establish AI safeguards before 'problems emerge'. However, Professor Hinton argues that this attitude could easily result in humanity's total annihilation. He said: 'If we can't figure out a solution to how we can still be around when they're much smarter than us and much more powerful than us, we'll be toast. 'We need a counter-pressure to the tech bros who are saying there should be no regulations on AI.'
Elon Musk's hatred of AI explained: Billionaire believes it will spell the end of humans - a fear Stephen Hawking shared
Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence. The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon.' At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it got out of hand. His main fear is that, in the wrong hands, advanced AI could overtake humans and spell the end of mankind - a scenario known as the Singularity.
That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race. 'It would take off on its own and redesign itself at an ever-increasing rate.' Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and in OpenAI, the creator of the popular ChatGPT program that has taken the world by storm in recent months. During a 2016 interview, Musk said that OpenAI was created to 'have democratisation of AI technology to make it widely available.' Musk founded OpenAI with Sam Altman, the company's CEO, but in 2018 the billionaire attempted to take control of the start-up. His request was rejected, forcing him to quit OpenAI and move on with his other projects. In November 2022, OpenAI launched ChatGPT, which became an instant success worldwide. The chatbot uses 'large language model' software trained on a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt. ChatGPT is used to write research papers, books, news articles, emails and more. But while Altman is basking in its glory, Musk is attacking ChatGPT. He says the AI is 'woke' and deviates from OpenAI's original non-profit mission. 'OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,' Musk tweeted in February. The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction - but what does it actually mean? In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.
Experts have said that once AI reaches this point, it will be able to innovate much faster than humans. There are two ways the advancement could play out. In the first, humans and machines work together to create a world better suited for humanity - for example, humans could scan their consciousness and store it in a computer, in which they would live forever. In the second, AI becomes more powerful than humans, taking control and making humans its slaves - though if this ever happens, it is likely far off in the future. Researchers are now looking for signs of AI reaching the Singularity, such as the technology's ability to translate speech with the accuracy of a human and to perform tasks faster. Former Google engineer Ray Kurzweil predicts it will be reached by 2045.


Entrepreneur
3 days ago
- Entrepreneur
Godfather of AI Geoffrey Hinton: AI Needs Maternal Instincts
The "Godfather of AI" fears that superintelligent AI will challenge human dominance — but he has a suggestion that could reframe AI assistants as AI mothers. In a keynote address at the Ai4 conference in Las Vegas on Tuesday, Geoffrey Hinton, 78, predicted a future where AI could assert control over humans as easily as an adult interacting with a 3-year-old child, getting them to complete a task with the promise of candy. AI is going to be "much smarter than us," Hinton said. Hinton is known as the "Godfather of AI" due to his pioneering studies that laid the groundwork for current AI systems, like ChatGPT and other chatbots. Hinton began his work in the late 1970s and eventually won the Nobel Prize in Physics in 2024 for it. He is currently a professor emeritus of computer science at the University of Toronto. In the address, Hinton suggested training AI to have "maternal instincts" so that it is programmed to care deeply about people. That way, advanced AI systems would be trained with the same instincts as a mother looking out for the survival of her children. "That's the only good outcome," Hinton said, per CNN Business. "If it's not going to parent me, it's going to replace me." Hinton said that he wasn't aware of how to technically accomplish the task of creating AI with maternal instincts, but stressed that it was vital for AI researchers and developers to work towards it. He emphasized that "the only model" of a more intelligent being controlled by a less intelligent being is "a mother being controlled by her baby."
Hinton also shortened his predicted timeline for artificial general intelligence (AGI), or AI that surpasses human intelligence. Instead of forecasting that it could take 30 to 50 years before AGI emerges, Hinton said that a more "reasonable bet" was five to 20 years. Hinton has weighed in on AI's impact on humanity before, with predictions ranging from extinction to mass joblessness. For example, in December, Hinton predicted that there was at least a 10% chance that AI would wipe out humanity within the next 30 years. Meanwhile, in a podcast appearance in June, Hinton predicted that AI would replace everyone in white-collar jobs, noting that occupations like paralegals and call center representatives were most at risk. He said that it would be "a long time" before AI takes over physical tasks and blue-collar jobs, making those occupations least at risk for the time being.


Sustainability Times
14-06-2025
- Business
- Sustainability Times
'Zuckerberg Wants a Godlike AI': Meta's Race Toward Superintelligence Could Redefine Humanity's Future and Trigger a Tech Power Shift
IN A NUTSHELL
- 🚀 Mark Zuckerberg is shifting from the metaverse to an ambitious pursuit of superintelligent AI, investing billions to surpass human cognition.
- 💼 Meta collaborates with Alexandr Wang's Scale AI, investing $14 billion and offering lucrative salaries to attract top researchers.
- ⚠️ The pursuit of superintelligence is controversial, with debates on ethical implications and existential risks.
- 🔧 Meta is reorganizing its AI division to address internal challenges and strengthen its position as a global leader in AI innovation.
Mark Zuckerberg continues his relentless pursuit of groundbreaking technological advancements, shifting focus from the metaverse to the ambitious realm of superintelligent AI. Despite the metaverse's lackluster results and staggering costs, Zuckerberg is determined to push boundaries once again, investing billions of dollars into this new venture. This strategic pivot represents Meta's unwavering commitment to staying at the forefront of technological innovation, aiming to surpass human cognitive capabilities with AI.
The Ambitious Quest for Superintelligent AI
Meta's endeavor to develop superintelligent AI marks a bold step forward in the tech industry. Unlike existing AI systems, which have yet to achieve Artificial General Intelligence (AGI) – the ability to match human intelligence – Meta aspires to transcend these boundaries. The company has established a dedicated lab to spearhead this mission, with the ambitious goal of creating AI that far exceeds human cognitive abilities. Central to this initiative is the collaboration with Alexandr Wang, the 28-year-old founder and CEO of Scale AI. Meta's substantial investment of $14 billion to acquire a 49% stake in the startup underscores the seriousness of its commitment.
Moreover, the company is reportedly offering substantial salaries ranging from seven to nine figures to attract top researchers from industry giants like OpenAI and Google. This aggressive recruitment strategy highlights Meta's determination to assemble a team capable of realizing its audacious vision.
The Historical Context of Meta's AI Ambitions
Meta's interest in AI is not a recent development. The company established its first AI lab in 2013, following an unsuccessful attempt to acquire DeepMind, which was later secured by Google. Despite skepticism from some quarters, including Meta's Chief Scientist Yann LeCun, who was unimpressed by OpenAI's ChatGPT, the tech world has seen a surge in interest and investment in AI, AGI, and now superintelligence. This historical backdrop provides insight into Meta's current pursuits, illustrating a pattern of consistent investment in AI technologies. While the concept of superintelligence remains contentious, with some dismissing it as science fiction, the financial and intellectual resources being funneled into this area are undeniably substantial.
Controversies and Challenges in Pursuing Superintelligence
The notion of developing an AI system more intelligent than humans is not without its controversies. Critics argue that the pursuit of superintelligence diverts attention from pressing issues associated with current AI technologies, such as bias, surveillance, and environmental impact. The idea of a superintelligent AI also raises concerns about existential risks, where a powerful AI could potentially operate beyond human control – echoing fears reminiscent of science fiction scenarios like Skynet.
Despite these concerns, the drive to achieve superintelligence persists, fueled by the potential benefits and competitive edge it promises. The debate surrounding these technologies underscores the importance of ethical and regulatory considerations as the field advances.
Meta's Strategic Reorganization and Future Prospects
In light of internal challenges and the lackluster performance of recent products like Llama and Meta AI, Meta has embarked on a strategic reorganization of its AI division. This move is part of a broader effort to reestablish the company as a global leader in AI innovation. However, navigating the current political and regulatory landscape in the United States presents additional hurdles. With heightened scrutiny from regulators, Meta must tread carefully, avoiding aggressive acquisitions of AI startups. Instead, the company is focused on strengthening its internal capabilities and fostering innovation within its existing structure. This strategic pivot aims to position Meta at the forefront of AI advancements while also addressing internal and external challenges. As Meta forges ahead in its quest for superintelligent AI, the technological and ethical implications of such advancements remain a topic of intense debate. With significant resources and talent directed toward this ambitious goal, the question remains: can Meta successfully navigate the complex landscape of AI innovation and regulation to achieve its vision of superintelligence?

Wall Street Journal
14-06-2025
- Wall Street Journal
Why Superintelligent AI Isn't Taking Over Anytime Soon
A primary requirement for being a leader in AI these days is to be a herald of the impending arrival of our digital messiah: superintelligent AI. For Dario Amodei of Anthropic, Demis Hassabis of Google and Sam Altman of OpenAI, it isn't enough to claim that their AI is the best. All three have recently insisted that it's going to be so good, it will change the very fabric of society.