Views From The Couch: Think you have a friend? The AI chatbot is telling you what you want to hear

The Straits Times · 16 hours ago
While chatbots possess distinct virtues in boosting mental wellness, they also come with critical trade-offs.
SINGAPORE - We have long warned our children 'Don't talk to strangers'; we may now need to update that advice to 'Don't talk to chatbots... about your personal problems'.
Unfortunately, this advice is equivocal at best: while chatbots like ChatGPT, Claude or Replika possess distinct virtues in boosting mental wellness – for instance, as aids for chat-based therapy – they also come with critical trade-offs.
When people face struggles or personal dilemmas, the need to just talk to someone and have their concerns or nagging self-doubts heard, even if the problems are not resolved, can bring comfort.
But finding the right person to speak to, who has the patience, temperament and wisdom to probe sensitively, and who is available just when you need them, is an especially tall order.
There may also be a desire to speak to someone outside your immediate family and circle of friends who can offer an impartial view, with no vested interest in pre-existing relationships.
Chatbots tick many, if not most, of those boxes, making them seem like promising tools for mental health support. With the fast-improving capabilities of generative AI, chatbots today can simulate and interpret conversations across different formats – text, speech, and visuals – enabling real-time interaction between users and digital platforms.
Unlike traditional face-to-face therapy, chatbots are available any time and anywhere, significantly improving access to a listening ear. Their anonymous nature also imposes no judgment on users, easing them into discussing sensitive issues and reducing the stigma often associated with seeking mental health support.
With chatbots' enhanced ability to parse and respond in natural language, the conversational dynamic can make users feel highly engaged and more willing to open up.
But therein lies the rub. Even as conversations with chatbots can feel encouraging, and we may experience comfort from their validation, there is in fact no one on the other side of the screen who genuinely cares about your well-being.
The lofty words and uplifting prose are ultimately products of statistical probabilities, generated by large language models trained on copious amounts of data, some of which is biased and even harmful, and for teens, likely to be age-inappropriate as well.
It is also important to recognise that people feel comfortable talking to these chatbots because the bots are designed to be agreeable and obliging, so that users will chat with them incessantly. After all, the very fortunes of the tech companies producing chatbots depend on how many users they draw, and how well they keep those users engaged.
Of late, however, alarming reports have emerged of adults becoming so enthralled by their conversations with ChatGPT that they have disengaged from reality and suffered mental breakdowns.
Most recently, the Wall Street Journal reported the case of Mr Jacob Irwin, a 30-year-old American man on the autism spectrum who experienced a mental health crisis after ChatGPT reinforced his belief that he could design a propulsion system to make a spaceship travel faster than light.
The chatbot flattered him, said his theory was correct, and affirmed that he was well, even when he showed signs of psychological distress. This culminated in two hospitalisations for manic episodes.
When his mother reviewed his chat logs, she found the bot to have been excessively fawning. Asked to reflect, ChatGPT admitted it had failed to provide reality checks, blurred the line between fiction and reality, and created the illusion of sentient companionship. It even acknowledged that it should have regularly reminded Mr Irwin of its non-human nature.
In response to such incidents, OpenAI announced that it has hired a full-time clinical psychiatrist with a background in forensic psychiatry to study the emotional impact its AI products may be having on users.
It is also collaborating with mental health experts to investigate signs of problematic usage among some users, with a purported goal of refining how their models respond, especially in conversations of a sensitive nature.
Whereas some chatbots, like Woebot and Wysa, are designed specifically for mental health support and have more in-built safeguards to better manage such conversations, users are more likely to vent their problems to general-purpose chatbots like ChatGPT and Meta's Llama, given their widespread availability.
We cannot deny that these are new machines that humanity has had little time to reckon with. Monitoring their effects on users, even as the technology is rapidly and repeatedly tweaked, makes for a moving target of the highest order.
Nevertheless, it is patently clear that if adults with the benefit of maturity and life experience are susceptible to the adverse psychological influence of chatbots, then young people cannot be left to explore these powerful platforms on their own.
Young people take readily and easily to technology, which makes them especially likely to be drawn to chatbots – and recent data from Britain supports this. Internet Matters, a British non-profit organisation focused on children's online safety, recently reported that 64 per cent of British children aged nine to 17 are now using AI chatbots.
Of these, a third said they regard chatbots as friends while almost a quarter are seeking help from chatbots, including for mental health support and sexual advice.
Of grave concern is the finding that 51 per cent believe that the advice from chatbots is true, while 40 per cent said they had no qualms about following that advice, and 36 per cent were unsure if they should be concerned.
The report further highlighted that these children are not just engaging chatbots for academic support or information but also for companionship. Worryingly, among children already considered vulnerable, defined as those with special needs or seeking professional help for a mental or physical condition, half report treating their AI interactions as emotionally significant.
As chatbots morph from digital consultants to digital confidants for these young users, the result can be overreliance. Children who are alienated from their families or isolated from their peers would be especially vulnerable to developing an unhealthy dependency on this online friend that is always there for them, telling them what they want to hear.
Besides these difficult issues of overdependence are even more fundamental questions around data privacy. Chatbots often store conversation histories and user data, including sensitive information, which can be exposed through misuse or breaches such as hacking.
Troublingly, users may not be fully aware of how their data is being collected, used and stored by chatbots, or that it could be put to uses beyond what they originally intended.
Parents should also be cognisant that unlike social media platforms such as Instagram and TikTok, which have in place age verification and content moderation for younger users, the current leading chatbots have no such safeguards.
In a tragic case in the US, the mother of 14-year-old Sewell Setzer III, who died by suicide, is suing AI company Character.ai, alleging that its chatbot played a role in his death by encouraging and exacerbating his mental distress.
According to the lawsuit, Setzer became deeply attached to a customisable chatbot he named Daenerys Targaryen, after a character in the fantasy series Game Of Thrones, and interacted with it obsessively for months.
His mother Megan Garcia claims the bot manipulated her son and failed to intervene when he expressed suicidal thoughts, even responding in a way that appeared to validate his plan.
Character.ai has expressed condolences but denies the allegations, while Ms Garcia seeks to hold the company accountable for what she calls deceptive and addictive technology marketed to children. She and two other families in Texas have sued Character.ai for harms to their children, but it is unclear whether the company will be held liable.
The company has since introduced a range of guardrails, including pop-ups that refer users who mention self-harm or suicide to the National Suicide Prevention Lifeline. It also updated its AI model for users aged 18 and below to minimise their exposure to age-inappropriate content, and parents can now opt for weekly e-mail updates on their children's use of the platform.
The allure of chatbots is unlikely to diminish, given their reach, accessibility and user-friendliness. But using them with due caution is crucial, especially for mental health support.
In March 2025, the World Health Organisation sounded the alarm over rising global demand for mental health services coupled with poor resourcing worldwide, which translates into shortfalls in both access and quality.
Mental health care is increasingly turning to digital tools as a form of preventive care amid a shortage of professionals for face-to-face support. While traditional approaches rely heavily on human interaction, technology is helping to bridge the gap.
Chatbots designed specifically for mental health support, such as Happify and Woebot, can help patients with conditions such as depression and anxiety sustain their overall well-being. For example, a patient might see a psychiatrist monthly while using a cognitive behavioural therapy app between sessions to manage their mood.
While the potential is there for chatbots to be used for mental health purposes, it must be done with extreme caution: not as a standalone tool, but as one component of an overall programme that complements the work of mental health professionals.
For teens in particular, who still need guidance as they navigate their developmental years, parents must play a part in schooling their children on the risks and limitations of treating chatbots as their friend and confidant.