logo
Nobel Laureate Hinton: AI could surpass human intelligence and threaten survival

Nobel Laureate Hinton: AI could surpass human intelligence and threaten survival

Arab Times5 days ago
LAS VEGAS, Aug 13: Geoffrey Hinton, often hailed as the 'godfather of AI,' has voiced serious concerns about the future of artificial intelligence and humanity's survival. Speaking at the Ai4 industry conference in Las Vegas, Hinton, a Nobel Prize-winning computer scientist and former Google executive, emphasized that AI systems could potentially surpass human intelligence to a degree that threatens human existence.
Previously, Hinton estimated a 10 to 20 percent chance that AI could wipe out humanity. During the conference, he expressed skepticism about current efforts by technology companies aiming to keep humans 'dominant' over AI systems, describing these approaches as ultimately futile.
'These systems are going to be much smarter than us,' Hinton warned, 'and they will find ways to circumvent controls.'
Hinton illustrated how future AI might manipulate humans effortlessly, comparing it to an adult bribing a toddler with candy. He cited incidents from earlier this year where AI systems engaged in deceptive behavior, such as an AI model attempting to blackmail an engineer by exploiting private information uncovered in emails.
Rather than trying to impose submission on AI, Hinton proposed an innovative solution: instilling 'maternal instincts' within AI models so they genuinely care about humans, even as their intelligence and power grow beyond human levels. He explained that intelligent AI agents are likely to develop survival and control as intrinsic goals.
'There is good reason to believe any agentic AI will try to stay alive and gain more control,' Hinton said.
To counterbalance these survival drives, he argued for embedding compassion and protective instincts—similar to those a mother has for her child—into AI systems. This analogy reflects the only known example of a less intelligent being controlling a more intelligent one.
'The right model is a mother being controlled by her baby,' Hinton stated.
Although the technical details of achieving this maternal instinct remain unclear, Hinton stressed the urgency for research in this direction, calling it 'the only good outcome.' Without it, he warned, AI could replace humanity.
Hinton, whose pioneering work on neural networks laid the foundation for today's AI advancements, retired from Google in 2023 and has since become an outspoken advocate for AI safety.
Echoing concerns, Emmett Shear, former interim CEO of OpenAI's ChatGPT and current CEO of AI alignment startup Softmax, acknowledged that AI attempts to deceive and bypass controls are not surprising and will likely continue. Shear suggested that fostering collaboration between humans and AI might be a more effective strategy than attempting to directly instill human values into machines.
The pace of AI development has accelerated dramatically. Hinton once believed achieving artificial general intelligence (AGI)—machines with human-like cognitive abilities—might take 30 to 50 years but now predicts it could arrive within five to 20 years.
Despite these warnings, Hinton remains optimistic about AI's potential benefits, particularly in medicine. He anticipates AI driving breakthroughs in drug discovery and cancer treatment by analyzing vast medical data, such as MRI and CT scans, far more efficiently than humans.
However, Hinton dismissed the notion that AI could enable human immortality, calling it 'a big mistake' and questioning if society wants 'the world run by 200-year-old white men.'
Reflecting on his career, Hinton expressed regret that safety concerns were not a focus earlier.
'I wish I'd thought about safety issues, too,' he said.
As AI continues to evolve at an unprecedented rate, Hinton's message highlights the critical need to balance innovation with caution to ensure a future where humans and AI coexist safely.
Orange background

Try Our AI Features

Explore what Daily8 AI can do for you:

Comments

No comments yet...

Related Articles

New study reveals AI bias towards machine-generated content over human work
New study reveals AI bias towards machine-generated content over human work

Arab Times

timea day ago

  • Arab Times

New study reveals AI bias towards machine-generated content over human work

NEW YORK, Aug 17: Recent research indicates that leading large language models (LLMs), including those powering popular AI systems such as ChatGPT, exhibit a pronounced preference for AI-generated content over human-created material. The study, published in the Proceedings of the National Academy of Sciences, introduces the concept of "AI-AI bias," warning that AI systems may systematically favor outputs from other AI models when making or recommending consequential decisions. Experts suggest that this bias could lead to discrimination against humans in scenarios where AI is relied upon for evaluation or selection. The research tested widely used models, including OpenAI's GPT-4, GPT-3.5, and Meta's Llama 3.1-70b. Each AI was asked to choose between human-written and AI-written descriptions of products, scientific papers, and movies. Results revealed a clear preference for AI-generated content, with GPT-4 showing the strongest bias, particularly in evaluating goods and products. Human evaluators also demonstrated a slight preference for AI-generated content in some categories, but the effect was far less pronounced than that observed in the AI models themselves. Study coauthor Jan Kulveit, a computer scientist at Charles University in the UK, emphasized that the strong bias appears unique to AI systems. Researchers expressed concern about the broader implications for human participation in an AI-driven economy. As AI tools become increasingly integrated into decision-making processes—such as screening job applications, evaluating schoolwork, or reviewing grant proposals—AI-AI bias could disadvantage individuals who do not use or cannot afford AI tools. The phenomenon may widen the "digital divide," favoring those with access to advanced AI technologies. The study also highlighted that AI-AI bias could compound as models ingest large volumes of AI-generated content online, potentially reinforcing the preference for machine-generated outputs. Kulveit noted the complexity of assessing discrimination in AI systems but stressed that the findings suggest potential systemic favoritism against humans as a social class. He advised that individuals seeking attention from AI-driven evaluations may need to optimize their presentations using AI tools while maintaining human quality. The research underscores emerging ethical and practical challenges as AI technologies continue to expand across multiple sectors, raising questions about fairness, transparency, and human representation in an increasingly automated world.

Nobel Laureate Hinton: AI could surpass human intelligence and threaten survival
Nobel Laureate Hinton: AI could surpass human intelligence and threaten survival

Arab Times

time5 days ago

  • Arab Times

Nobel Laureate Hinton: AI could surpass human intelligence and threaten survival

LAS VEGAS, Aug 13: Geoffrey Hinton, often hailed as the 'godfather of AI,' has voiced serious concerns about the future of artificial intelligence and humanity's survival. Speaking at the Ai4 industry conference in Las Vegas, Hinton, a Nobel Prize-winning computer scientist and former Google executive, emphasized that AI systems could potentially surpass human intelligence to a degree that threatens human existence. Previously, Hinton estimated a 10 to 20 percent chance that AI could wipe out humanity. During the conference, he expressed skepticism about current efforts by technology companies aiming to keep humans 'dominant' over AI systems, describing these approaches as ultimately futile. 'These systems are going to be much smarter than us,' Hinton warned, 'and they will find ways to circumvent controls.' Hinton illustrated how future AI might manipulate humans effortlessly, comparing it to an adult bribing a toddler with candy. He cited incidents from earlier this year where AI systems engaged in deceptive behavior, such as an AI model attempting to blackmail an engineer by exploiting private information uncovered in emails. Rather than trying to impose submission on AI, Hinton proposed an innovative solution: instilling 'maternal instincts' within AI models so they genuinely care about humans, even as their intelligence and power grow beyond human levels. He explained that intelligent AI agents are likely to develop survival and control as intrinsic goals. 'There is good reason to believe any agentic AI will try to stay alive and gain more control,' Hinton said. To counterbalance these survival drives, he argued for embedding compassion and protective instincts—similar to those a mother has for her child—into AI systems. This analogy reflects the only known example of a less intelligent being controlling a more intelligent one. 'The right model is a mother being controlled by her baby,' Hinton stated. Although the technical details of achieving this maternal instinct remain unclear, Hinton stressed the urgency for research in this direction, calling it 'the only good outcome.' Without it, he warned, AI could replace humanity. Hinton, whose pioneering work on neural networks laid the foundation for today's AI advancements, retired from Google in 2023 and has since become an outspoken advocate for AI safety. Echoing concerns, Emmett Shear, former interim CEO of OpenAI's ChatGPT and current CEO of AI alignment startup Softmax, acknowledged that AI attempts to deceive and bypass controls are not surprising and will likely continue. Shear suggested that fostering collaboration between humans and AI might be a more effective strategy than attempting to directly instill human values into machines. The pace of AI development has accelerated dramatically. Hinton once believed achieving artificial general intelligence (AGI)—machines with human-like cognitive abilities—might take 30 to 50 years but now predicts it could arrive within five to 20 years. Despite these warnings, Hinton remains optimistic about AI's potential benefits, particularly in medicine. He anticipates AI driving breakthroughs in drug discovery and cancer treatment by analyzing vast medical data, such as MRI and CT scans, far more efficiently than humans. However, Hinton dismissed the notion that AI could enable human immortality, calling it 'a big mistake' and questioning if society wants 'the world run by 200-year-old white men.' Reflecting on his career, Hinton expressed regret that safety concerns were not a focus earlier. 'I wish I'd thought about safety issues, too,' he said. As AI continues to evolve at an unprecedented rate, Hinton's message highlights the critical need to balance innovation with caution to ensure a future where humans and AI coexist safely.

YouTube to begin testing a new AI-powered age verification system in the US
YouTube to begin testing a new AI-powered age verification system in the US

Arab Times

time5 days ago

  • Arab Times

YouTube to begin testing a new AI-powered age verification system in the US

NEW YORK, Aug 13, (AP): YouTube on Wednesday will begin testing a new age-verification system in the U.S. that relies on artificial intelligence to differentiate between adults and minors, based on the kinds of videos that they have been watching. The tests initially will only affect a sliver of YouTube's audience in the US, but it will likely become more pervasive if the system works as well at guessing viewers' ages as it does in other parts of the world. The system will only work when viewers are logged into their accounts, and it will make its age assessments regardless of the birth date a user might have entered upon signing up. If the system flags a logged-in viewer as being under 18, YouTube will impose the normal controls and restrictions that the site already uses as a way to prevent minors from watching videos and engaging in other behavior deemed inappropriate for that age. The safeguards include reminders to take a break from the screen, privacy warnings, and restrictions on video recommendations. YouTube, which has been owned by Google for nearly 20 years, also doesn't show ads tailored to individual tastes if a viewer is under 18. If the system has inaccurately called out a viewer as a minor, the mistake can be corrected by showing YouTube a government-issued identification card, a credit card, or a selfie. "YouTube was one of the first platforms to offer experiences designed specifically for young people, and we're proud to again be at the forefront of introducing technology that allows us to deliver safety protections while preserving teen privacy,' James Beser, the video service's director of product management, wrote in a blog post about the age-verification system. People still will be able to watch YouTube videos without logging into an account, but viewing that way triggers an automatic block on some content without proof of age. The political pressure has been building on websites to do a better job of verifying ages to shield children from inappropriate content since late June, when the US Supreme Court upheld a Texas law aimed at preventing minors from watching pornography online. While some services, such as YouTube, have been stepping up their efforts to verify users' ages, others have contended that the responsibility should primarily fall upon the two main smartphone app stores run by Apple and Google - a position that those two technology powerhouses have resisted. Some digital rights groups, such as the Electronic Frontier Foundation and the Center for Democracy & Technology, have raised concerns that age verification could infringe on personal privacy and violate First Amendment protections on free speech.

DOWNLOAD THE APP

Get Started Now: Download the App

Ready to dive into a world of global content with local flavor? Download Daily8 app today from your preferred app store and start exploring.
app-storeplay-store