Heavy ChatGPT users tend to be more lonely, suggests research


The Guardian, 25 March 2025

Heavy users of ChatGPT tend to be lonelier, more emotionally dependent on the AI tool and have fewer offline social relationships, new research suggests.
Only a small number of users engage emotionally with ChatGPT, but those who do are among the heaviest users, according to a pair of studies from OpenAI and the MIT Media Lab.
The researchers wrote that the users who engaged in the most emotionally expressive personal conversations with the chatbots tended to experience higher loneliness – though it isn't clear if this is caused by the chatbot or because lonely people are seeking emotional bonds.
While the researchers have stressed that the studies are preliminary, they raise pressing questions about how AI chatbot tools, which according to OpenAI are used by more than 400 million people a week, are influencing people's offline lives.
The researchers, who plan to submit both studies to peer-reviewed journals, found that participants who 'bonded' with ChatGPT – typically in the top 10% for time spent with the tool – were more likely than others to be lonely, and to rely on it more.
The researchers established a complex picture in terms of the impact. Voice-based chatbots initially appeared to help mitigate loneliness compared with text-based chatbots, but this advantage started to slip the more someone used them.
After using the chatbot for four weeks, female study participants were slightly less likely to socialise with people than their male counterparts. Participants who interacted with ChatGPT's voice mode set to a gender other than their own reported significantly higher levels of loneliness and more emotional dependence on the chatbot at the end of the experiment.
In the first study, the researchers analysed real-world data from close to 40m interactions with ChatGPT, and then asked the 4,076 users who had had those interactions how the exchanges had made them feel.
For the second study, the Media Lab recruited almost 1,000 people to take part in an in-depth four-week trial examining how participants interacted with ChatGPT for a minimum of five minutes each day. Participants then completed a questionnaire to measure their feelings of loneliness, levels of social engagement, and emotional dependence on the bot.
The findings echo earlier research: in 2023, MIT Media Lab researchers found that chatbots tended to mirror the emotional sentiment of a user's messages – happier messages led to happier responses.
Dr Andrew Rogoyski, a director at the Surrey Institute for People-Centred Artificial Intelligence, said that because people were hard-wired to think of a machine behaving in human-like ways as a human, AI chatbots could be 'dangerous', and far more research was needed to understand their social and emotional impacts.
'In my opinion, we are doing open-brain surgery on humans, poking around with our basic emotional wiring with no idea of the long-term consequences. We've seen some of the downsides of social media – this is potentially much more far-reaching,' he said.
Dr Theodore Cosco, a researcher at the University of Oxford, said the research raised 'valid concerns about heavy chatbot usage', though he noted it 'opens the door to exciting and encouraging possibilities'.
'The idea that AI systems can offer meaningful support — particularly for those who may otherwise feel isolated — is worth exploring. However, we must be thoughtful and intentional in how we integrate these tools into everyday life.'
Dr Doris Dippold, who researches intercultural communication at the University of Surrey, said it would be important to establish what caused emotional dependence on chatbots. 'Are they caused by the fact that chatting to a bot ties users to a laptop or a phone and therefore removes them from authentic social interaction? Or is it the social interaction, courtesy of ChatGPT or another digital companion, which makes people crave more?'


Related Articles

Campaigners urge UK watchdog to limit use of AI after report of Meta's plan to automate checks

The Guardian, 4 hours ago

Internet safety campaigners have urged the UK's communications watchdog to limit the use of artificial intelligence in crucial risk assessments following a report that Mark Zuckerberg's Meta was planning to automate checks.

Ofcom said it was 'considering the concerns' raised by the letter following a report last month that up to 90% of all risk assessments at the owner of Facebook, Instagram and WhatsApp would soon be carried out by AI.

Social media platforms are required under the UK's Online Safety Act to gauge how harm could take place on their services and how they plan to mitigate those potential harms – with a particular focus on protecting child users and preventing illegal content from appearing. The risk assessment process is viewed as a key aspect of the act.

In a letter to Ofcom's chief executive, Dame Melanie Dawes, organisations including the Molly Rose Foundation, the NSPCC and the Internet Watch Foundation described the prospect of AI-driven risk assessments as a 'retrograde and highly alarming step'.

'We urge you to publicly assert that risk assessments will not normally be considered as "suitable and sufficient", the standard required by … the Act, where these have been wholly or predominantly produced through automation.'

The letter also urged the watchdog to 'challenge any assumption that platforms can choose to water down their risk assessment processes'.

A spokesperson for Ofcom said: 'We've been clear that services should tell us who completed, reviewed and approved their risk assessment. We are considering the concerns raised in this letter and will respond in due course.'

Meta said the letter deliberately misstated the company's approach to safety and that it was committed to high standards and complying with regulations. 'We are not using AI to make decisions about risk,' said a Meta spokesperson. 'Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content, and our technological advancements have significantly improved safety outcomes.'

The Molly Rose Foundation organised the letter after NPR, a US broadcaster, reported last month that updates to Meta's algorithms and new safety features would mostly be approved by an AI system and no longer scrutinised by staffers.

According to one former Meta executive, who spoke to NPR anonymously, the change will allow the company to launch app updates and features on Facebook, Instagram and WhatsApp more quickly but will create 'higher risks' for users, because potential problems are less likely to be prevented before a new product is released to the public. NPR also reported that Meta was considering automating reviews for sensitive areas including youth risk and monitoring the spread of falsehoods.

Minister says AI 'does lie' but defends Government amid copyright row

South Wales Guardian, 4 hours ago

Peter Kyle acknowledged the technology was 'not flawless' as he insisted the Government would 'never sell downstream' the rights of artists in the UK. He also said he had 'mistakenly' said his preferred option on AI and copyright was requiring rights-holders to 'opt out' of their material being used by tech companies, and had since 'gone back to the drawing board'.

Ministers have faced a backlash from major figures in the creative industries over their approach to copyright, with Sir Elton John this week describing the situation as an 'existential issue'.

The Government is locked in a standoff with the House of Lords, which has demanded that artists be offered immediate copyright protection as an amendment to the Data (Use and Access) Bill. Peers have attempted to change the legislation by adding a commitment to introduce transparency requirements aimed at ensuring rights-holders are able to see when their work has been used and by whom.

Asked about the risk of AI producing unreliable information, Mr Kyle said 'people need to understand that AI is not flawless, and that AI does lie because it's based on human characteristics'.

'Now it is getting more precise as we move forward. It's getting more powerful as we move forward,' he told Sky News's Sunday Morning With Trevor Phillips. 'But as with every single technology that comes into society, you can only safely use it and wisely use it by understanding how it works.'

He added: 'We are going to legislate for AI going forward and we're going to balance it with the same legislation that we'll bring in to modernise the copyright legislation as well.'

The Government has said it will address copyright issues as a whole after the more than 11,500 responses to its consultation on the impact of AI have been reviewed, rather than in what it has branded 'piecemeal' legislation.

Among the proposals had been a suggestion that tech companies could be given free access to British music, films and books in order to train AI models without permission or payment, with artists required to 'opt out' if they do not want their work to be used.

Asked about the prospect of an opt-out clause, Mr Kyle told the BBC's Sunday With Laura Kuenssberg programme: 'I always had on the table from the outset an opt-out clause. But I mistakenly said this was my preferred option that had more prominence than perhaps some of the creatives wanted it to have, and I've now sort of gone back to the drawing board on that, because I am listening to what people want.'

Last month hundreds of stars including Sir Elton, Sir Paul McCartney and Kate Bush signed a joint letter to Sir Keir Starmer urging the Prime Minister to introduce safeguards against work being plundered for free.

US attacks on science and research a 'great gift' to China on artificial intelligence, former OpenAI board member says

The Guardian, 5 hours ago

The US administration's targeting of academic research and international students is a 'great gift' to China in the race to compete on artificial intelligence, former OpenAI board member Helen Toner has said.

The director of strategy at Georgetown's Center for Security and Emerging Technology (CSET) joined the board of OpenAI in 2021 after a career studying AI and the relationship between the United States and China.

Toner, a 33-year-old University of Melbourne graduate, was on the board for two years until a falling out with founder Sam Altman in 2023. Altman was fired by the board over claims that he was not 'consistently candid' in his communications and that the board did not have confidence in his ability to lead. In the chaotic months that followed, Altman was re-hired and three members of the board, including Toner, were ousted instead. The events will soon be the subject of a planned film, with Luca Guadagnino, the director of Challengers and Call Me By Your Name, reportedly in talks to direct.

The saga, according to Time magazine – which named her one of the Top 100 most influential people on AI in 2024 – resulted in the Australian having 'the ear of policymakers around the world trying to regulate AI'.

At CSET, Toner has a team of 60 people working on AI research, producing white papers and briefing policymakers on the use of AI in the military, workforce, biosecurity and cybersecurity sectors. 'A lot of my work focuses on some combination of AI, safety and security issues, the Chinese AI ecosystem and also what gets called frontier AI,' Toner said.

Toner said the United States is concerned about losing the AI race to China, and while US chip export controls make it harder for China to get the compute power to compete with the US, the country was still making a 'serious push' on AI, as highlighted by the surprise success of the Chinese generative AI model DeepSeek earlier this year.

The Trump administration's attacks on research and bans on international students are a 'gift' to China in the AI race with the US, Toner said. 'Certainly it's a great gift to [China] the way that the US is currently attacking scientific research, and foreign talent – which is a huge proportion of the USA workforce – is immigrants, many of them coming from China,' she said. 'That is a big … boon to China in terms of competing with the US.'

The AI boom has led to claims and concerns about a job wipeout caused by companies using AI to replace work that had otherwise been done by humans. Dario Amodei, the CEO of Anthropic, the company behind the generative AI model Claude, told Axios last week that AI could reduce entry-level white-collar jobs by 50% and result in 20% unemployment in the next five years.

Toner said Amodei 'often says things that seem directionally right to me, but in terms of … timeline and numbers often seem quite aggressive', but added that disruption in the jobs market had already started to show. 'The kind of things that [language model-based AI] can do best at the moment … if you can give them a bite-size task – not a really long-term project, but something that you might not need ages and ages to do and something where you still need human review,' she said. 'That's a lot of the sort of work that you give to interns or new grads in white-collar industries.'

Experts have suggested companies that invested heavily in AI are now being pressed to show the results of that investment. Toner said that while the real-world use of AI can generate a lot of value, it is less clear which business models and which players will benefit from that value. Dominant uses might be a mix of different AI services plugged into existing applications – like phone keyboards that can now transcribe voices – as well as stand-alone chatbots, but it's 'up in the air' which type of AI would actually dominate, she said.

Toner said the push for profitability was less risky than the overall race to be first in AI advancements. 'It means that these companies are all making it up as they go along and figuring out as they go how to make trade-offs between getting products out the door, doing extra testing, putting in extra guardrails, putting in measures that are supposed to make the model more safe but also make it more annoying to use,' she said. 'They're figuring that all out on the fly, and … they're making those decisions while under pressure to go as fast as they can.'

Toner said she was worried about the idea of 'gradual disempowerment to AI' – 'meaning a world where we just gradually hand over more control over different parts of society and the economy and government to AI systems, and then realise a bit too late that it's not going the way that we wanted, but we can't really turn back'.

She is most optimistic about AI's use in improving science and drug discovery, and about self-driving services like Waymo reducing fatalities on the roads. 'With AI, you never want to be looking for making the AI perfect, you want it to be better than the alternative. And when it comes to cars, the alternative is thousands of people dying per year. If you can improve on that, that's amazing. You're saving many, many people.'

Toner joked that her friends had been sending her options on who might play her in the film. 'Any of the names that friends of mine have thrown my way are all these incredibly beautiful actresses,' she said. 'So I'll take any of those, whoever they choose.'
