
AI godfather warns AI could soon develop its own language and outsmart humans

India Today

03-08-2025


Geoffrey Hinton, the man many call the Godfather of AI, has issued yet another cautionary note, and this time it sounds like something straight out of a sci-fi film. Speaking on the One Decision podcast, the Nobel Prize-winning scientist warned that artificial intelligence may soon develop a private language of its own, one that even its human creators won't be able to understand.

'Right now, AI systems do what's called 'chain of thought' reasoning in English, so we can follow what it's doing,' Hinton explained. 'But it gets more scary if they develop their own internal languages for talking to each other.'

That, he says, could take AI into uncharted and unnerving territory. Machines have already demonstrated the ability to produce 'terrible' thoughts, and there's no reason to assume those thoughts will always be in a language we can track.

Hinton's words carry weight. He is, after all, the 2024 Nobel Physics laureate whose early work on neural networks paved the way for today's deep learning models and large-scale AI systems. Yet he says he didn't fully appreciate the dangers until much later in his career. 'I should have realised much sooner what the eventual dangers were going to be,' he admitted. 'I always thought the future was far off and I wish I had thought about safety sooner.' Now, that delayed realisation fuels his warnings.

One of Hinton's biggest fears lies in how AI systems learn. Unlike humans, who must share knowledge painstakingly, digital brains can copy and paste what they know in an instant. 'Imagine if 10,000 people learned something and all of them knew it instantly, that's what happens in these systems,' he explained on the BBC.

This collective, networked intelligence means AI can scale its learning at a pace no human can match. Current models such as GPT-4 already outstrip humans when it comes to raw general knowledge. For now, reasoning remains our stronghold, but that advantage, says Hinton, is shrinking.

While he is vocal, Hinton says others in the industry are far less forthcoming.
'Many people in big companies are downplaying the risk,' he noted, suggesting their private worries aren't reflected in their public statements. One notable exception, he says, is Google DeepMind CEO Demis Hassabis, whom Hinton credits with showing genuine interest in tackling these risks.

As for Hinton's high-profile exit from Google in 2023, he says it wasn't a protest. 'I left Google because I was 75 and couldn't program effectively anymore. But when I left, maybe I could talk about all these risks more freely,' he said.

Even as governments roll out initiatives like the White House's new 'AI Action Plan', Hinton believes that regulation alone won't be enough. The real task, he argues, is to create AI that is 'guaranteed benevolent', a tall order, given that these systems may soon be thinking in ways no human can fully follow.
