'AI friends': Automating empathy


The Star, 24-05-2025

YOUNG people today are connected 24/7 – scrolling through social media feeds, liking posts, sharing their opinions, posting details of their lives, jumping into group chats, battling through multiplayer games, and even making new friends online.
You might think this would provide social fulfilment, but the truth is that all these digital interactions can mask a deeper sense of isolation among the young.
A survey conducted by Gallup and Meta across 142 countries between 2022 and 2023 found that approximately one in four young people below the age of 30 reported feeling fairly or very lonely.
Perhaps surprisingly, the same survey found that adults aged 65 and above reported feeling lonely at a much lower rate – 10% lower – even though all age groups reported high levels of social connectedness, above 70%.
Within Malaysia, the Health Ministry's National Health and Morbidity Survey 2022 also pointed to a rising trend in loneliness among secondary school students, with the rate climbing sharply from 8.1% to 16.2% within 10 years.
This is a worrying trend, says clinical psychologist Dr Chua Sook Ning, founder of mental health portal Relate Malaysia, because loneliness often goes unrecognised as a serious issue, overshadowed by more commonly discussed mental health concerns.
But amid this growing emotional void, a new kind of support is emerging: chatbots driven by artificial intelligence.
Meta chief executive Mark Zuckerberg has even pitched a future where we all have 'AI friends' to help with loneliness – and this future is already unfolding in Malaysia.
Universiti Malaysia Sarawak senior lecturer Dr Chuah Kee Man says his team's recent research found that nearly 90% of respondents, particularly those from Gen Z, regularly used AI tools such as ChatGPT and Google's Gemini to share their personal challenges.
However, while experts say such tools may provide short-term relief for those seeking to ease loneliness, they may risk deepening isolation by replacing genuine human connection and delaying help from professionals.
AI mates versus human buddies
Do you know who your kids are talking to online? — 123rf
Dr Azree Nazri, head of laboratory at the Institute of Mathematical Research at Universiti Putra Malaysia, says several studies have indicated that AI chatbots are providing some sense of emotional relief in users.
For example, he says, a 2023 study of college students in South Korea found that users of AI-based wellness apps reported reductions in loneliness and depression over a four-month period, with qualitative feedback revealing a high level of user satisfaction and emotional support.
Even more intriguing, he adds, are studies from last year which found that users consistently underestimated how emotionally impactful their interactions with AI companions would be.
'The data suggested that these digital interactions provided emotional relief comparable in some cases to real human support, particularly in moments of vulnerability,' he says.
Chuah, who specialises in educational technology, instructional design, and computational linguistics, says AI has already shown its value as a companion for someone who is anxious or simply seeking a listening ear.
'I personally think AI friends may be effective for alleviating short-term loneliness or providing a safe space for social rehearsal.'
However, he says that while AI can simulate certain aspects of companionship, like offering empathy or personalised responses, it is still a long way from truly replicating the richness of human relationships.
'Real friendships are built on shared experiences and an ever-evolving cycle of emotion and trust. So AI can play a supportive role but it cannot yet replace the fundamental need for genuine human connection,' he says.
Pointing to results from his recent research, he says while 90% of respondents said they regularly used AI tools for emotional support, all of them said no when asked if AI could replace human friends.
The issue, he says, is authenticity: while AI can mimic empathy, it lacks true self-awareness and lived experience.
Users can sense this 'emotional gap' over time, which could paradoxically lead to deeper feelings of isolation, he says.
This is backed by recent studies from the Massachusetts Institute of Technology, says Chua, who explains that the studies found higher daily engagement with AI, whether for personal or non-personal conversations, was associated with increased loneliness, greater dependence on the technology, more problematic usage patterns, and reduced social interaction in real life.
'While AI might provide short-term comfort or convenience, it risks deepening the very isolation proponents claim it eases by pulling people further away from meaningful human connection,' she says.
Speaking about the same studies, Azree says this suggests that when overused, AI can discourage deeper engagement in human relationships.
He says the four-week longitudinal study involving nearly 1,000 participants revealed a direct correlation between daily AI chatbot usage and reduced real-world social interaction, alongside rising emotional dependence.
'These findings underline a critical point: AI companions can act as emotional buffers, especially for those lacking human support, but they should be regarded as supplemental, not substitutive, tools for mental and emotional well-being.'
Build better communities, not better chatbots
Chuah also expresses concern that those who spend more time chatting with AI chatbots than with real humans may end up not knowing how to interact with people in real life, or blurring the line between reality and simulation.
'Also, while AI can offer support, it is not a substitute for professional help. Users experiencing severe depression or other mental health issues might delay seeking real assistance, relying on AI instead,' Chuah says.
For computer science expert Emeritus Prof Datuk Tengku Mohd Tengku Sembok, the question of AI friends isn't simply about whether they can effectively address loneliness – it also opens the door to serious ethical and spiritual questions.
'Loneliness is not merely a psychological void; it is a spiritual ailment rooted in disconnection from both community and the Creator,' he says.
Religious teachings consider caring for the lonely a collective obligation, he explains, so outsourcing this duty to machines may dehumanise society and shift moral responsibility away from people.
'Are we allowing AI to replace what should be human acts of duty?' he asks.
More practically, he says, there are also significant risks of data exploitation as these 'AI friends' inevitably collect emotional and behavioural data.
'If commercial interests drive such platforms, the vulnerable may become commodities.
'Any system that pretends to be your friend while mining your feelings must be scrutinised,' he says.
That is not to say he rejects AI entirely; rather, Prof Tengku Mohd calls for humanising its use by treating it as a guide towards real help rather than a replacement for genuine community-building.
'In the end, addressing loneliness is not about building better chatbots. It is about building better communities. And that is something no machine can do better than us.'


