AI godfather warns AI could soon develop its own language and outsmart humans

India Today · 4 days ago
Geoffrey Hinton, the man many call the Godfather of AI, has issued yet another cautionary note, and this time it sounds like something straight out of a sci-fi film. Speaking on the One Decision podcast, the Nobel Prize-winning scientist warned that artificial intelligence may soon develop a private language of its own, one that even its human creators won't be able to understand.

'Right now, AI systems do what's called "chain of thought" reasoning in English, so we can follow what it's doing,' Hinton explained. 'But it gets more scary if they develop their own internal languages for talking to each other.'

That, he says, could take AI into uncharted and unnerving territory. Machines have already demonstrated the ability to produce 'terrible' thoughts, and there's no reason to assume those thoughts will always be in a language we can track.
Hinton's words carry weight. He is, after all, the 2024 Nobel Physics laureate whose early work on neural networks paved the way for today's deep learning models and large-scale AI systems. Yet he says he didn't fully appreciate the dangers until much later in his career. 'I should have realised much sooner what the eventual dangers were going to be,' he admitted. 'I always thought the future was far off and I wish I had thought about safety sooner.' Now, that delayed realisation fuels his advocacy.

One of Hinton's biggest fears lies in how AI systems learn. Unlike humans, who must share knowledge painstakingly, digital brains can copy and paste what they know in an instant. 'Imagine if 10,000 people learned something and all of them knew it instantly, that's what happens in these systems,' he explained on BBC News.

This collective, networked intelligence means AI can scale its learning at a pace no human can match. Current models such as GPT-4 already outstrip humans when it comes to raw general knowledge. For now, reasoning remains our stronghold, but that advantage, says Hinton, is shrinking fast.

While he is vocal, Hinton says others in the industry are far less forthcoming. 'Many people in big companies are downplaying the risk,' he noted, suggesting their private worries aren't reflected in their public statements. One notable exception, he says, is Google DeepMind CEO Demis Hassabis, whom Hinton credits with showing genuine interest in tackling these risks.

As for Hinton's high-profile exit from Google in 2023, he says it wasn't a protest. 'I left Google because I was 75 and couldn't program effectively anymore.
But when I left, maybe I could talk about all these risks more freely,' he states.

While governments roll out initiatives like the White House's new 'AI Action Plan', Hinton believes that regulation alone won't be enough. The real task, he argues, is to create AI that is 'guaranteed benevolent', a tall order, given that these systems may soon be thinking in ways no human can fully follow.

Related Articles

IAMAI flags ambiguities in data protection law; cautions impact on AI innovation

Economic Times · 7 minutes ago

The Internet and Mobile Association of India (IAMAI) flagged ambiguities related to handling personal data in the Digital Personal Data Protection (DPDP) Act, 2023, in a submission before the Ministry of Electronics and Information Technology (MeitY). The industry body said restrictions on using publicly available data for artificial intelligence (AI) training would increase compliance burdens on AI companies, hinder technological progress, and disproportionately affect startups and smaller AI firms.

In the submission, IAMAI said the ambiguities 'surrounding the processing of publicly available personal data might pose practical challenges for AI companies, particularly those using large datasets for training their models.' For them to verify whether the publicly available data was voluntarily made available is 'practically unfeasible,' it added. 'Even where personal data is shared publicly to comply with a legal obligation, it may be re-shared or resurface online through various means well after the initial legal disclosure, making it difficult for AI companies to process such data,' IAMAI presented in its submission.

IAMAI suggested amending the DPDP Act to remove obstacles to using publicly available personal data for AI training. As an interim measure, the government may consider exempting data fiduciaries from certain provisions of the DPDP Act when they are processing personal data solely for AI training or fine-tuning, it suggested.

The DPDP Act is yet to be operationalised, after the ministry invited stakeholders' comments on the rules in January. Earlier this week, digital payment companies Google Pay, PhonePe, and Amazon Pay, and the National Payments Corporation of India (NPCI) sought exemption from provisions of the DPDP Act that require user consent for each transaction, citing increased compliance burden, especially severe for small platforms.
