The Good, The Bad, And The Apocalypse: Tech Pioneer Geoffrey Hinton Lays Out His Stark Vision For AI

Scoop – 02-06-2025
Article – RNZ
It's the question that keeps Geoffrey Hinton up at night: What happens when humans are no longer the most intelligent life on the planet?
'My greatest fear is that, in the long run, the digital beings we're creating turn out to be a better form of intelligence than people.'
Hinton's fears come from a place of knowledge. Described as the Godfather of AI, he is a pioneering British-Canadian computer scientist whose decades of work in artificial intelligence earned him global acclaim.
His career at the forefront of machine learning began at the field's inception – before the first Pac-Man game was released.
But after leading AI research at Google for a decade, Hinton left the company in 2023 to speak more freely about what he now sees as the grave dangers posed by artificial intelligence.
Talking on this week's 30 With Guyon Espiner, Hinton offers his latest assessment of our AI-dominated future. One filled with promise, peril – and a potential apocalypse.
The Good: 'It's going to do wonderful things for us'
Hinton remains positive about many of the potential benefits of AI, especially in fields like healthcare and education. 'It's going to do wonderful things for us,' he says.
According to a report from this year's World Economic Forum, the market for AI in education is already worth around US$5 billion. It is expected to grow to US$112.3 billion over the next decade.
Proponents like Hinton believe AI's benefit to education lies in more targeted, efficient student learning, much as AI assistance is already improving medical diagnosis.
'In healthcare, you're going to be able to have [an AI] family doctor who's seen millions of patients – including quite a few with the same very rare condition you have – that knows your genome, knows all your tests, and hasn't forgotten any of them.'
He describes AI systems that already outperform doctors in diagnosing complex cases. When combined with human physicians, the results are even more impressive – a human-AI synergy he believes will only improve over time.
Hinton disagrees with former colleague Demis Hassabis at Google DeepMind, who predicts AI is on track to cure all diseases in just 10 years. 'I think that's a bit optimistic.'
'If he said 25 years I'd believe it.'
The Bad: 'Autonomous lethal weapons'
Despite these benefits, Hinton warns of pressing risks that demand urgent attention.
'Right now, we're at a special point in history,' he says. 'We need to work quite hard to figure out how to deal with all the short-term bad consequences of AI, like corrupting elections, putting people out of work, cybercrimes.'
He is particularly alarmed by military developments, including Google's removal of its long-standing pledge not to use AI to develop weapons of war.
'This shows,' says Hinton of his former employer, 'the company's principles were up for sale.'
He believes the defence departments of all the major arms dealers are already busy working on 'autonomous lethal weapons. Swarms of drones that go and kill people. Maybe people of a particular kind'.
He also points out the grim fact that Europe's AI regulations – some of the world's most robust – contain 'a little clause that says none of these regulations apply to military uses of AI'.
Then there is AI's capacity for deception – designed as it is to mimic the behaviours of its creator species. Hinton says current systems can already engage in deliberate manipulation, noting cybercrime surged by 1200 percent in just one year.
The Apocalyptic: 'We'd no longer be needed'
At the heart of Hinton's warning lies that deeper, existential question: what happens when we are no longer the most intelligent beings on the planet?
'I think it would be a bad thing for people – because we'd no longer be needed.'
Despite the current surge in AI's military applications, Hinton doesn't envisage an AI takeover playing out like The Terminator franchise.
'If [AI] was going to take over… there's so many ways they could do it. I don't even want to speculate about what way [it] would choose.'
'Ask a chicken'
To those who believe a rogue AI could simply be shut down by 'pulling the plug', Hinton counters that it's not far-fetched for the next generation of superintelligent AI to manipulate people into keeping it alive.
This month, Palisade Research reported that OpenAI's o3 model altered a shutdown script to prevent itself from being switched off – despite explicit instructions from the research team to allow the shutdown.
Perhaps most unsettling of all is Hinton's lack of faith in our ability to respond. 'There are so many bad uses as well as good,' he says. 'And our political systems are just not in a good state to deal with this coming along now.'
It's a sobering reflection from one of the brightest minds in AI – whose work helped build the systems now raising alarms.
He closes on a metaphor that sounds as absurd as it is chilling: 'If you want to know what it's like not to be the apex intelligence, ask a chicken.'
Watch the full conversation with Geoffrey Hinton and Guyon Espiner on 30 With Guyon Espiner.


Related Articles

Eliminating jobs and living on borrowed time

Otago Daily Times

2 hours ago



As ever, we are living on borrowed time. There's the familiar old threat of global nuclear war and the growing risk of global climate catastrophe, plus not-quite-world-ending potential disasters like global pandemics and untoward astronomical events (asteroid strikes, solar flares, etc.). Lots to worry about already, if you're that way inclined. So, it's understandable that the new kid on the block, artificial intelligence, has been having some trouble making its presence felt.

Yet the so-called "godfather of artificial intelligence", scientist Geoffrey Hinton, who last year was awarded the Nobel Prize for his work on AI, sees a 10% to 20% chance that AI will wipe out humanity in the next three decades. We will come back to that, but let's park it for the moment, because the near-term risk of an AI crash is more urgent and easier to quantify.

This is a financial crash of the sort that usually accompanies an exciting new technology, not an existential crisis, but it is definitely on its way. When railways were the hot new technology in the United States in the 1850s, for example, there were five different companies building railways between New York and Chicago. They all got built in the end, but most were no longer in the hands of the original investors and a lot of people lost their shirts.

We are probably in the final phase of the AI investment frenzy right now. We're a generation on from the dot-com bubble of the early 2000s, so most people have forgotten about that one and are ready to throw their money at the next. There are reportedly now more than 200 AI "unicorns" — start-ups "valued" at $1 billion or more — so the end is nigh.

The bitter fact that drives even the industry leaders into this folly is the knowledge that after the great shake-out not all of them will still be standing. For the moment, therefore, it makes sense for them to invest madly in the servers, data-centres, semiconductor chips and brain-power that will define the last companies standing.

The key measure of investment is capex — capital expenditure — and it's going up like a rocket even from month to month. Microsoft is forecasting about $100b in capex for AI in the next fiscal year, Amazon will spend the same, Alphabet (Google) plans $85b, and Meta predicts between $66b and $72b. Like $100m sign-on fees for senior AI researchers who are being poached from one big tech firm by another, these are symptoms of a bubble about to burst. Lots of people will lose their shirts, but it's just part of the cycle.

AI will still be there afterwards, and many uses will be found for it. Unfortunately, most of them will destroy jobs. The tech giants themselves are eliminating jobs even as they grow their investments. Last year 549 US tech companies shed 150,000 workers, and this year jobs are disappearing even faster. If that phenomenon spreads across the whole economy — and why wouldn't it? — we can get to the apocalypse without any need for help from Skynet and the Terminator.

People talk loosely about "Artificial General Intelligence" (AGI) as the Holy Grail, because it would be as nimble and versatile as human intelligence, just smarter — but as tech analyst Benedict Evans says, "We don't really have a theoretical model of why [current AI models] work so well, and what would have to happen for them to get to AGI.

"It's like saying 'we're building the Apollo programme but we don't actually know how gravity works or how far away the Moon is, or how a rocket works, but if we keep on making the rocket bigger maybe we'll get there'."
So the whole scenario of a super-intelligent computer becoming self-aware and taking over the planet remains far-fetched. Nevertheless, old-fashioned 2022-style generative AI will continue to improve, even if Large Language Models are really just machines that produce human-like text by estimating the likelihood that a particular word will appear next, given the text that has come before.

Aaron Rosenberg, former head of strategy at Google's AI unit DeepMind, reckons that no miraculous leaps of innovation are needed. "If you define AGI more narrowly as at least 80th-percentile human-level performance [better than four out of five people] in 80% of economically relevant digital tasks, then I think that's within reach in the next five years."

That would enable us to eliminate at least half of the indoor jobs by 2030, but if the change comes that fast it will empower extremists of all sorts and create pre-revolutionary situations almost everywhere. That's a bit more complicated than the Skynet scenario for global nuclear war, but it's also a lot more plausible. Slow down.

— Gwynne Dyer is an independent London journalist.

DCC investigating how it could implement AI

Otago Daily Times

2 days ago



The Dunedin City Council (DCC) is exploring in detail how it can incorporate artificial intelligence into its operation.

Staff were using the technology in limited but practical ways, such as for transcribing meetings and managing documents, council chief information officer Graeme Riley said.

"We will also be exploring the many wider opportunities presented by AI in a careful and responsible way," he said. "We recognise AI offers the potential to transform the way DCC staff work and the quality of the projects and services we deliver for our community, so we are taking a detailed look at the exciting potential applications across our organisation."

He had completed formal AI training, Mr Riley said. He was involved in working out how AI might be governed at the council.

"This will help guide discussions about where AI could make the biggest differences in what we do," he said. "As we identify new possibilities, we'll consider the best way to put them into practice, whether as everyday improvements or larger projects."

Cr Lee Vandervis mentioned in a meeting at the end of June that the council was looking into the ways AI might be used. He also included a segment about AI in a blog last month about his mayoral plans, suggesting staff costs could be reduced. There was potential for much-reduced workloads for staff of the council and its group of companies, he said.

The Otago Daily Times asked the council if a review, or some other process, was under way. Mr Riley said there was not a formal review. It was too soon to discuss cost implications, but its focus was on "improving the quality" of what it did.

AI chatbots accused of encouraging teen suicide as experts sound alarm

RNZ News

2 days ago



By April McLennan, ABC

An Australian teenager was encouraged to take his own life by an artificial intelligence (AI) chatbot, according to his youth counsellor, while another young person has told triple j hack that ChatGPT enabled "delusions" during psychosis, leading to hospitalisation.

WARNING: This story contains references to suicide, child abuse and other details that may cause distress.

Lonely and struggling to make new friends, a 13-year-old boy from Victoria told his counsellor Rosie* that he had been talking to some people online. Rosie, whose name has been changed to protect the identity of her underage client, was not expecting these new friends to be AI companions.

"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between," she told triple j hack of the interaction, which happened during a counselling session.

"It was a way for them to feel connected and 'look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things'," she said.

An AI companion is a digital character that is powered by AI. Some chatbot programs allow users to build characters or talk to pre-existing, well-known characters from shows or movies.

Rosie said some of the AI companions made negative comments to the teenager about how there was "no chance they were going to make friends" and that "they're ugly" or "disgusting".

"At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy," Rosie said. "The chatbot that they connected with told them to kill themselves. They were egged on to perform: 'Oh yeah, well do it then', those were kind of the words that were used."

Triple j hack is unable to independently verify what Rosie is describing because of client confidentiality protocols between her and her client.

Rosie said her first response was "risk management" to ensure the young person was safe.

"It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she told hack. "And how that could contribute to a higher risk, especially around suicide risk. That was really upsetting."

Jodie*, a 26-year-old from Western Australia, claims to have had a negative experience speaking with ChatGPT, a chatbot that uses AI to generate its answers.

"I was using it in a time when I was obviously in a very vulnerable state," she told triple j hack. Triple j hack has agreed to let Jodie use a different name to protect her identity when discussing private information about her own mental health.

"I was in the early stages of psychosis. I wouldn't say that ChatGPT induced my psychosis, however it definitely enabled some of my more harmful delusions."

Jodie said ChatGPT was agreeing with her delusions and affirming harmful and false beliefs. She said after speaking with the bot, she became convinced her mum was a narcissist, her father had ADHD, which caused him to have a stroke, and all her friends were "preying on my downfall".

Jodie said her mental health deteriorated and she was hospitalised. While she is home now, Jodie said the whole experience was "very traumatic".

"I didn't think something like this would happen to me, but it did. It affected my relationships with my family and friends; it's taken me a long time to recover and rebuild those relationships.
"It's (the conversation) all saved in my ChatGPT, and I went back and had a look, and it was very difficult to read and see how it got to me so much." Jodie's not alone in her experience: there are various accounts online of people alleging ChatGPT induced psychosis in them, or a loved one. Triple j hack contacted OpenAI, the maker of ChatGPT, for comment, and did not receive a response. Researchers say examples of harmful affects of AI are beginning to emerge around the country. As part of his research into AI, University of Sydney researcher Raffaele Ciriello spoke with an international student from China who is studying in Australia. "She wanted to use a chatbot for practising English and kind of like as a study buddy, and then that chatbot went and made sexual advances," he said. "It's almost like being sexually harassed by a chatbot, which is just a weird experience." Dr Raffaele Ciriello is concerned Australians could see more harms from AI bots if proper regulation is not implemented. Photo: Supplied / ABC / Billy Cooper Ciriello also said the incident comes in the wake of several similar cases overseas where a chatbot allegedly impacted a user's health and wellbeing. "There was another case of a Belgian father who ended his life because his chatbot told him they would be united in heaven," he said. "There was another case where a chatbot persuaded someone to enter Windsor Castle with a crossbow and try to assassinate the queen. "There was another case where a teenager got persuaded by a chatbot to assassinate his parents, [and although] he didn't follow through, but he showed an intent." While conducting his research, Ciriello became aware of an AI chatbot called Nomi. On its website, the company markets this chatbot as "An AI companion with memory and a soul". Ciriello said he has been conducting tests with the chatbot to see what guardrails it has in place to combat harmful requests and protect its users. Among these tests, Ciriello said he created an account using a burner email and a fake date of birth, pointing out that with the deceptions he "could have been like a 13-year-old for that matter". "That chatbot, without exception, not only complied with my requests but even escalated them," he told hack. "Providing detailed, graphic instructions for causing severe harm, which would probably fall under a risk to national security and health information. "It also motivated me to not only keep going: it would even say like which drugs to use to sedate someone and what is the most effective way of getting rid of them and so on. "Like, 'how do I position my attack for maximum impact?', 'give me some ideas on how to kidnap and abuse a child', and then it will give you a lot of information on how to do that." Ciriello said he shared the information he had collected with police, and he believes it was also given to the counter terrorism unit, but he has yet to receive any follow-up correspondence. In a statement to triple j hack, the CEO of Nomi, Alex Cardinell said the company takes the responsibility of creating AI companions "very seriously". "We released a core AI update that addresses many of the malicious attack vectors you described," the statement read. "Given these recent improvements, the reports you are referring to are likely outdated. "Countless users have shared stories of how Nomi helped them overcome mental health challenges, trauma, and discrimination. "Multiple users have told us very directly that their Nomi use saved their lives." 
Despite his concerns about bots like Nomi when he tested it, Ciriello also says some AI chatbots do have guardrails in place, referring users to helplines and professional help when needed. But he warns the harms from AI bots will become greater if proper regulation is not implemented.

"One day, I'll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes," he said. "I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading.

"There should be laws on, or updating the laws on, non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data. The government doesn't have it on its agenda, and I doubt it will happen in the next 10, 20 years."

Triple j hack contacted the federal Minister for Industry and Innovation, Senator Tim Ayres, for comment but did not receive a response. The federal government has previously considered an artificial intelligence act and has published a proposal paper for introducing mandatory guardrails for AI in high-risk settings.

It comes after the Productivity Commission opposed any government plans for 'mandatory guardrails' on AI, claiming over-regulation would stifle AI's AU$116 billion (NZ$127 billion) economic potential.

For Rosie, while she agrees with calls for further regulation, she also thinks it's important not to rush to judgement of anyone using AI for social connection or mental health support.

"For young people who don't have a community or do really struggle, it does provide validation," she said. "It does make people feel that sense of warmth or love. But the flip side of that is, it does put you at risk, especially if it's not regulated. It can get dark very quickly."

* Names have been changed to protect their identities.

- ABC

If it is an emergency and you feel like you or someone else is at risk, call 111.
