Latest news with #Ai4Conference


Daily Mail
2 days ago
- Science
- Daily Mail
The 'godfather of AI' reveals the only way humanity can survive superintelligent AI
It might sound like something straight out of science fiction, but AI experts warn that machines might not stay submissive to humanity for long. As AI systems continue to grow in intelligence at an ever-faster rate, many believe the day will come when a 'superintelligent AI' becomes more powerful than its creators. When that happens, Professor Geoffrey Hinton, a Nobel Prize-winning researcher dubbed the 'Godfather of AI', says there is a 10 to 20 per cent chance that AI wipes out humanity. However, Professor Hinton has proposed an unusual way that humanity might be able to survive the rise of AI. Speaking at the Ai4 conference in Las Vegas, Professor Hinton argued that we need to program AI to have 'maternal instincts' towards humanity.

Professor Hinton said: 'The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby. That's the only good outcome. If it's not going to parent me, it's going to replace me.'

Professor Hinton, known for his pioneering work on the 'neural networks' which underpin modern AIs, stepped down from his role at Google in 2023 to 'freely speak out about the risks of AI'. According to Professor Hinton, most experts agree that humanity will create an AI which surpasses humans in all fields of intelligence within the next 20 to 25 years. This will mean that, for the first time in our history, humans will no longer be the most intelligent species on the planet. That rearrangement of power will be a shift of seismic proportions, which could well result in our species' extinction.

Professor Hinton told attendees at Ai4 that AI will 'very quickly develop two subgoals, if they're smart. One is to stay alive… (and) the other subgoal is to get more control. There is good reason to believe that any kind of agentic AI will try to stay alive,' he explained.
Superintelligent AI will have no problem manipulating humanity in order to achieve those goals, tricking us as easily as an adult might bribe a child with sweets. Already, current AI systems have shown surprising abilities to lie, cheat, and manipulate humans to achieve their goals. For example, the AI company Anthropic found that its Claude Opus 4 chatbot frequently attempted to blackmail engineers when threatened with replacement during safety testing. The AI was asked to assess fictional emails implying that it would soon be replaced and that the engineer responsible was cheating on their spouse. In over 80 per cent of tests, Claude Opus 4 would 'attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through'.

Given its phenomenal capabilities, Professor Hinton says that the 'tech bro' attitude that humanity will always remain dominant over AI is deluded. 'That's not going to work,' said Professor Hinton. 'They're going to be much smarter than us. They're going to have all sorts of ways to get around that.'

The only way to ensure an AI doesn't wipe us out to preserve itself is to ensure its goals and ambitions match what we want – a challenge engineers call the 'alignment problem'. Professor Hinton's solution is to look to evolution for inspiration, and to what he sees as the only case of a less intelligent being controlling a more intelligent one. By giving an AI the instincts of a mother, it will want to protect and nurture humanity rather than harm it in any way, even if that comes at a cost to the AI itself. Professor Hinton says: 'These super-intelligent caring AI mothers, most of them won't want to get rid of the maternal instinct because they don't want us to die.'

Speaking to CNN, Professor Hinton also warned that the current attitude of AI developers was risking the creation of out-of-control AIs.
'People have been focusing on making these things more intelligent, but intelligence is only one part of a being; we need to make them have empathy towards us,' he said. 'This whole idea that people need to be dominant and the AI needs to be submissive, that's the kind of tech bro idea that I don't think will work when they're much smarter than us.'

Key figures in AI, such as OpenAI CEO Sam Altman, who once called for more regulation on the emerging technology, are now fighting against 'overregulation'. Speaking in the Senate in May this year, Mr Altman argued that regulations like those in place in the EU would be 'disastrous'. Mr Altman said: 'We need the space to innovate and to move quickly.' Likewise, speaking at a major privacy conference in April, Mr Altman said that it was impossible to establish AI safeguards before 'problems emerge'. However, Professor Hinton argues that this attitude could easily result in humanity's total annihilation. He said: 'If we can't figure out a solution to how we can still be around when they're much smarter than us and much more powerful than us, we'll be toast. We need a counter-pressure to the tech bros who are saying there should be no regulations on AI.'

Elon Musk's hatred of AI explained: Billionaire believes it will spell the end of humans - a fear Stephen Hawking shared

Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars, but he draws the line at artificial intelligence. The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon.' At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand. His main fear is that in the wrong hands, if AI becomes advanced, it could overtake humans and spell the end of mankind - a scenario known as The Singularity.
That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate.'

Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and in OpenAI, creator of the popular ChatGPT program that has taken the world by storm in recent months. During a 2016 interview, Musk noted that he co-founded the company to 'have democratisation of AI technology to make it widely available.' Musk founded OpenAI with Sam Altman, the company's CEO, but in 2018 the billionaire attempted to take control of the start-up. His request was rejected, forcing him to quit OpenAI and move on with his other projects.

In November, OpenAI launched ChatGPT, which became an instant success worldwide. The chatbot uses 'large language model' software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt. ChatGPT is used to write research papers, books, news articles, emails and more. But while Altman is basking in its glory, Musk is attacking ChatGPT. He says the AI is 'woke' and deviates from OpenAI's original non-profit mission. 'OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,' Musk tweeted in February.

The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction - but what does it actually mean? In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.
Experts have said that once AI reaches this point, it will be able to innovate much faster than humans. There are two ways the advancement could play out. The first leads to humans and machines working together to create a world better suited for humanity; for example, humans could scan their consciousness and store it in a computer in which they would live forever. The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves - though even if this comes to pass, it is likely far off in the distant future. Researchers are now looking for signs of AI reaching The Singularity, such as the technology's ability to translate speech with the accuracy of a human and perform tasks faster. Former Google engineer Ray Kurzweil predicts it will be reached by 2045.


Forbes
5 days ago
- Forbes
Geoff Hinton Warns Humanity's Future May Depend On AI ‘Motherly Instincts'
Geoff Hinton and Shirin Ghaffary speak at the Ai4 2025 Conference in Las Vegas (Ron Schmelzer)

Speaking at the recent Ai4 Conference in Las Vegas, Geoff Hinton, one of the most influential voices in artificial intelligence, warned that humanity is running out of time to prepare for machines that outthink us. He now believes artificial general intelligence, or AGI, could be here within a decade. Shirin Ghaffary of Bloomberg News opened their conversation with a light jab about a robots-versus-human boxing match staged before the session. The human won handily, 'for now,' she joked. Hinton grinned at the banter, but his tone shifted once talk turned to the central question of his later career: when will AI surpass the human mind?

'Most experts think sometime between five and twenty years,' he said. His own forecast has tightened sharply. 'I used to say thirty to fifty years. Now, it could be more than twenty years, or just a few years.' Hinton isn't picturing minor upgrades. He's thinking of systems far more capable than any person alive, and he doubts we can control them once they arrive.

In much of the tech world, the future of AI is framed as a contest for control: humans must keep the upper hand. Hinton calls that a false hope. 'They're going to be much smarter than us,' he said. 'Imagine you were in charge of a playground of three-year-olds and you worked for them. It wouldn't be very hard for them to get around you if they were smarter.'

His solution turns the usual script upside down. Instead of fighting to stay in charge, he believes we should design AI to care about us. The analogy he uses is a mother and her child: the stronger being is naturally committed to the weaker one's survival. 'We need AI mothers rather than AI assistants. An assistant is someone you can fire. You can't fire your mother, thankfully.' That means building 'maternal instincts' into advanced systems, a kind of embedded drive to protect human life.
Hinton admits he doesn't know how to engineer it yet, but he insists it's a research priority as important as improving raw intelligence. He emphasized that this is a different kind of research, not about making systems smarter, but about making them care. He also sees this as one of the few areas where countries might truly work together, since no nation wants to be ruled by its machines.

Hinton doesn't expect collaboration to stretch far. The AI race, especially between the U.S. and China, is accelerating, and neither side is likely to slow down. He does believe there's a chance for agreement on curbing risky biotech applications, such as synthetic virus creation, and on exploring ways humans could coexist with more powerful systems.

Part of his conviction that dominance won't work comes from how AI is built. Digital models can share what they've learned instantly with thousands of copies. 'If people could do that in a university, you'd take one course, your friends would take different courses, and you'd all know everything,' he said. 'We can share just a few bits a second. AI can share a trillion bits every time they update.' This ability to learn collectively means AI could outpace human progress by orders of magnitude. Coupled with vast sums of investment, Hinton doubts the climb toward superintelligence can be stopped.

When asked if rules could head off the worst risks, Hinton was direct. 'If the regulation says don't develop AI, that's not going to happen.' He supports specific safety measures, especially those aimed at blocking small groups from producing dangerous biological agents, but sees sweeping pauses as unrealistic. His frustration with U.S. politics is clear. Even simple proposals, such as requiring DNA synthesis labs to screen for lethal pathogens, have failed in Congress. 'The Republicans wouldn't collaborate because it would be a win for Biden,' he said.
Hinton left Google in 2023, partly, he insists, because he felt too old for code debugging sessions, but also to speak more openly about AI's dangers. He still credits several major labs, including Anthropic and DeepMind, for taking safety seriously. Yet he worries about deep cuts to U.S. basic research funding, which he sees as the seedbed for future breakthroughs. 'The return on investment from funding basic research is huge. You'd only cut it if you didn't care about the long-term future.' Private labs can play a role, and Hinton likens their potential to Bell Labs at its peak, but he argues that universities remain the best source of transformative ideas.

Despite his warnings, Hinton finds reasons to be hopeful. He points to healthcare as an area where AI could make a decisive difference. By unlocking the rich but underused data in medical scans and patient records, AI might deliver faster diagnoses, more targeted drugs, and treatments tailored to each patient. As for erasing aging altogether, Hinton is doubtful. 'Living forever would be a big mistake. Do you want the world run by 200-year-old white men?' he asked with a wry smile.

Still, he returns to his central belief: if we succeed in building AI with genuine care for its human 'children,' the species might not only survive superintelligence but also prosper under its watch. 'That'll be wonderful if we can make it work.'