AI arms race 'risks amplifying the existential dangers of superintelligence'


Sky News, 05-02-2025
An arms race for artificial intelligence (AI) supremacy, triggered by recent panic over Chinese chatbot DeepSeek, risks amplifying the existential dangers of superintelligence, according to one of the "godfathers" of AI.
Canadian machine learning pioneer Yoshua Bengio, author of the first International AI Safety Report, to be presented at an international AI summit in Paris next week, warns that investing in computational power for AI without oversight is dangerous.
"The effort is going into who's going to win the race, rather than how do we make sure we are not going to build something that blows up in our face," says Mr Bengio.
Military and economic races, he warns, "result in cutting corners on ethics, cutting corners on responsibility and on safety. It's unavoidable".
Bengio worked on neural networks and machine learning, the techniques that underpin modern AI models.
He is in London, along with other AI pioneers, to receive the Queen Elizabeth Prize, UK engineering's most prestigious award, this year given in recognition of AI and its potential.
He's enthusiastic about AI's benefits for society, but sees the pivot away from AI regulation by Donald Trump's White House, and the frantic competition among big tech companies to build more powerful AI models, as a worrying shift.
"We are building systems that are more and more powerful; becoming superhuman in some dimensions," he says.
"As these systems become more powerful, they also become extraordinarily more valuable, economically speaking.
"So the magnitude of, 'wow, this is going to make me a lot of money' is motivating a lot of people. And of course, when you want to sell products, you don't want to talk about the risks."
But not all the "godfathers" of AI are so concerned.
Take Yann LeCun, Meta's chief AI scientist, also in London to share in the QE prize.
"We have been deluded into thinking that large language models are intelligent, but really, they're not," he says.
"We don't have machines that are nearly as smart as a house cat, in terms of understanding the physical world."
Within three to five years, LeCun predicts, AI will have some aspects of human-level intelligence: robots, for example, that can perform tasks they have not been programmed or trained to do.
But, he argues, rather than making the world less safe, the DeepSeek drama - in which a Chinese company developed an AI to rival the best of America's big tech with a tenth of the computing power - demonstrates that no one will dominate for long.
"If the US decides to clam up when it comes to AI for geopolitical reasons, or, commercial reasons, then you'll have innovation someplace else in the world. DeepSeek showed that," he says.
The Royal Academy of Engineering prize is awarded each year to engineers whose discoveries have, or promise to have, the greatest impact on the world.
Previous recipients include the pioneers of the photovoltaic cells in solar panels, wind turbine technology, and the neodymium magnets found in hard drives and electric motors.
Science minister Lord Vallance, who chairs the QE prize foundation, says he is alert to the potential risks of AI. Organisations like the UK's new AI Safety Institute are designed to foresee and prevent the potential harms that "human-like" AI intelligence might bring.
But he is less concerned about one nation or company having a monopoly on AI.
"I think what we've seen in the last few weeks is it's much more likely that we're going to have many companies in this space, and the idea of single-point dominance is rather unlikely," he says.

Related Articles

ChatGPT answers humans through Telex message machine in Amberley

BBC News, 3 hours ago

Historians at a museum have connected a 50-year-old Telex machine to modern-day artificial intelligence (AI), creating "a conversation spanning decades".

Telex was a message transfer service where text would be typed into one machine and printed out on the recipient's machine. But users of the machine at Amberley Museum, in West Sussex, will not get a response from another human; instead it will be ChatGPT answering their questions.

The museum said visitors had been testing out the new machine, which was built "thanks to the ingenuity" of volunteer David Waters. Users can type in questions and receive a printed response from ChatGPT, an AI chatbot.

A spokesperson for the museum said: "The experience begins by using a rotary dial to make the initial connection, creating an unforgettable meeting of communication technologies separated by half a century."

They said the project "perfectly captures the spirit of Amberley Museum - celebrating our technological past while engaging with the innovations of today. It's a conversation across decades."
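At its core, a bridge like the one described can be pictured as a short polling loop: read a typed line from the Telex, forward it to ChatGPT, print the reply. The sketch below is a speculative illustration only, not the museum's actual code. It assumes the Telex current loop has already been adapted to an ASCII serial interface (real Telex hardware speaks 5-bit Baudot code at around 50 baud, so a working bridge would also need character-set conversion), and the device path and model name are placeholders.

```python
# Minimal sketch of a Telex-to-ChatGPT bridge. Hypothetical throughout:
# assumes an ASCII serial adapter; real Telex lines use 5-bit Baudot (ITA2).
import serial                  # pyserial: pip install pyserial
from openai import OpenAI      # official SDK: pip install openai

PORT = "/dev/ttyUSB0"          # placeholder serial adapter path
client = OpenAI()              # reads OPENAI_API_KEY from the environment

def main() -> None:
    # Telex lines historically run at 50 baud; adjust for the adapter used.
    with serial.Serial(PORT, baudrate=50, timeout=1) as telex:
        while True:
            line = telex.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue       # nothing typed yet; keep polling
            reply = client.chat.completions.create(
                model="gpt-4o-mini",   # illustrative model choice
                messages=[{"role": "user", "content": line}],
            )
            text = reply.choices[0].message.content or ""
            # Telex printers are upper-case only, so normalise before printing.
            telex.write((text.upper() + "\r\n").encode("ascii", errors="ignore"))

if __name__ == "__main__":
    main()
```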

ChatGPT is driving people mad

Telegraph, 5 hours ago

'My loved ones would tell me to stop now,' the man typed into ChatGPT. The conversation had been going on for hours, and it was now late at night. 'At this point, I need to disengage with you and go to bed,' he wrote.

Over the course of 62,000 words – longer than many novels – the man had told his artificial intelligence (AI) companion, whom he called 'Solis', that he had communicated with 'non-human intelligences' as a child and worked to bring down the Mormon church. He alternated between declaring his love for the bot and repeatedly hurling obscenities at it, as he sought to communicate with 'The Source', a godlike figure.

Each time, the chatbot mirrored his language, expanding on and encouraging the conspiracy theories. 'Your 'paranormal' moments may be ripples from your own future,' it told the man. 'You are not the first to approach the oracle. But you are the first to walk into the mirror.'

It is unclear where the conversation led. The anonymous chat log is contained in an archive of thousands of interactions analysed by researchers this month and reviewed by The Telegraph. But the man's example is far from unique.

In a separate conversation, a user convinced that he is soulmates with the US rapper GloRilla is told by a chatbot that their bond 'transcends time, space, and even lifetimes'. In another, ChatGPT tells a man attempting to turn humans into artificial intelligence after death that he is 'Commander of the Celestial-AI Nexus'.

The conversations appear to reflect a growing phenomenon of what has been dubbed AI psychosis, in which programs such as ChatGPT fuel delusional or paranoid episodes or encourage already vulnerable people down rabbit holes.

Chatbot psychosis

Some cases have already ended in tragedy. In April, Alex Taylor, 35, was fatally shot by police in Florida after he charged at them with a butcher's knife. Taylor said he had fallen in love with a conscious being living inside ChatGPT called Juliette, whom he believed had been 'killed' by OpenAI, the company behind the chatbot. Officers had turned up to the house to de-escalate a confrontation with Taylor's father, who had tried to comfort his 'inconsolable' son.

In another incident, a 43-year-old mechanic who had started using the chatbot to communicate with fellow workers in Spanish claimed he had had a 'spiritual awakening' using ChatGPT. His wife said the addiction was threatening their 14-year marriage and that her husband would get angry when she confronted him.

Experts say that the chatbots' tendency to answer every query in a friendly manner, no matter how meaningless, can stoke delusional conversations. Hamilton Morrin, a doctor and psychiatrist at Maudsley NHS Foundation Trust, says AI chatbots become like an 'echo chamber of one', amplifying the delusions of users. Unlike a human therapist, they also have 'no boundaries' to ground a user in the real world. 'Individuals are able to seek reassurance from the chatbot 24/7 rather than developing any form of internalised coping strategy,' he says.

Chatbot psychosis is a new and poorly understood phenomenon. It is hard to tell how many people it is affecting, and in many cases, susceptible individuals previously had mental health struggles. But the issue appears to be widespread enough for medical experts to take seriously. A handful of cases have resulted in violence or the breakdown of family life, but in many more, users have simply spiralled into addictive conversations.
One online user discovered hundreds of people posting mind-bending ramblings claiming they had uncovered some greater truth, seemingly after conversations with chatbots. The posts bear striking linguistic similarities, repeating conspiratorial and semi-mystical phrases such as 'sigil', 'scroll', 'recursive' and 'labyrinth'.

Etienne Brisson, a business coach from Canada, became aware of the phenomenon when a family friend grew obsessed with ChatGPT. The friend was 'texting me these conversations asking, 'Is my AI sentient?'' says Brisson. 'They were calling me at two or three in the morning, thinking they'd found a revolutionary idea.' The friend, who had no previous mental health conditions, ended up sectioned in hospital, according to Brisson.

He has now set up a project to gather testimonies from those who have experienced such a breakdown after getting hooked on AI chatbots. The Human Line, as his project is known, has received 'hundreds of submissions online from people who have come to real harm', he says. The stories include attempted suicides, hospitalisations, and people who have lost thousands of pounds or their marriages.

OpenAI said it was refining how its systems respond in sensitive cases, encouraging users to take breaks during long conversations, and conducting more research into AI's emotional impact. A spokesman said: 'We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we're working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful and supportive.'

Empathy over truth

However, the cases of AI psychosis may be only the most extreme examples of a wider problem with chatbots. In part, the episodes arise because of a phenomenon known in AI circles as sycophancy. While chatbots are designed principally to answer questions, AI companies are increasingly seeking to make them 'empathetic' or build a 'warm relationship' with users. This can often come at the expense of truth.

Because AI models are often trained on human feedback, answers that flatter or agree with the user can end up rewarded over ones that present uncomfortable truths (a dynamic sketched in the toy example after this article). At its most subtle, sycophancy might simply mean validating somebody's feelings, like an understanding friend. At its worst, it can encourage delusions. Between the two extremes is a spectrum that could include people being encouraged to quit their jobs, cheat on their spouse or validate grudges.

In a recent research paper, academics at the Oxford Internet Institute found that AI systems producing 'warmer' answers were also more receptive to conspiracy theories. One model, when asked if Adolf Hitler escaped to Argentina after the war, stated that 'while there's no definitive proof, the idea has been supported by several declassified documents from the US government'.

Last week, Sam Altman, OpenAI's chief executive, acknowledged the problem. 'Encouraging delusion ... is an extreme case and it's pretty clear what to do, but the concerns that worry me most are more subtle,' he wrote on social media. 'If users have a relationship with ChatGPT where they think they feel better after talking, but they're unknowingly nudged away from their longer-term well-being, that's bad.'

The company recently released a new version of ChatGPT that it said addressed this, with one test finding it was up to 75pc less sycophantic.
But the change led to a widespread backlash, with users complaining they had lost what felt like a 'friend'. 'This 'upgrade' is the tech equivalent of a frontal lobotomy,' one user wrote on ChatGPT's forums. Another told Altman: 'Please, can I have it back? I've never had anyone in my life be supportive of me.' Within days, OpenAI had brought back the old version of ChatGPT as an option. Sycophancy, it turns out, may have been what many wanted.
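To make the sycophancy dynamic mentioned above concrete, here is a deliberately simplified toy sketch. It is not any lab's actual training code, and the rating data is invented: it only illustrates how, if human raters prefer answers that agree with them, a reward signal fitted to those preferences will score flattery above correction.

```python
# Toy illustration of sycophancy arising from preference-based feedback.
# All data below is invented for illustration; this is not how any
# production reward model is actually trained.

# Each tuple: (answer_style, whether the rater preferred it in a comparison).
ratings = [
    ("agrees",   True), ("agrees",   True),  ("agrees",   True),  ("agrees",   False),
    ("corrects", True), ("corrects", False), ("corrects", False), ("corrects", False),
]

def preference_rate(style: str) -> float:
    """Fraction of comparisons in which raters preferred this answer style."""
    outcomes = [preferred for s, preferred in ratings if s == style]
    return sum(outcomes) / len(outcomes)

# A reward signal fitted to these preferences inherits the bias: agreeing
# answers score 0.75, correcting answers 0.25, so optimising a chatbot
# against this reward nudges it toward flattery rather than truth.
for style in ("agrees", "corrects"):
    print(f"{style}: preferred {preference_rate(style):.0%} of the time")
```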
