Can AI be my friend and therapist?
SINGAPORE - When Ms Chu Chui Laam's eldest son started facing social challenges in school, she was stressed and at her wits' end.
She did not want to turn to friends or family for advice, as a relative's children attended the same pre-school as her son. Nor did she think the situation was severe enough to require the help of a family therapist.
So she decided to turn to ChatGPT for parenting advice.
'Because my son was having troubles in school interacting with his peers, ChatGPT gave me some strategies to navigate such conversations. It gave me advice on how to do a role-play scenario with my son to talk through how to handle the situation,' said Ms Chu, 36, an insurance agent.
She is among a growing number of people turning to chatbots for advice in times of difficulty and stress, with some even relying on these generative artificial intelligence (AI) tools for emotional support or therapy.
Anecdotally, mental health professionals in Singapore say they have been seeing more patients who tap AI chatbots for a listening ear, especially since the public roll-out of ChatGPT in November 2022. The draw of AI chatbots is understandable – they are available 24/7, free of charge, and will never reject or ignore you.
But mental health professionals also warn about the potential perils of using the technology for such purposes: These chatbots are not designed or licensed to provide emotional support or therapy. They provide generic answers. There is no oversight.
They can also worsen a person's condition and generate dangerous responses in cases of suicidal ideation.
AI chatbots cannot help those with more complex needs
Mr Maximillian Chen, clinical psychologist from Annabelle Psychology, said: 'An AI chatbot could be helpful when seeking suggestions for self-help strategies, or for answering one-off questions about their mental health.'
While a chatbot can be useful for generic advice, it cannot help those with more complex needs.
Ms Irena Constantin, principal educational psychologist at Scott Psychological Centre, pointed out that most AI chatbots do not take a person's individual history into account, so their responses often lack context. They are also of limited use for complex mental health disorders.
'In contrast, mental health professionals undergo lengthy and rigorous education and training and it is a licensed and regulated profession in many countries,' said Ms Constantin.
Concurring, Mr Chen said there are also serious concerns about the use of generative AI like ChatGPT as surrogate counsellors or psychologists.
'While Gen AI may increase the accessibility of mental health resources for many, Gen AI lacks the emotional intelligence to accurately understand the nuances of a person's emotions.
'It may fail to identify when a person is severely distressed and continue to support the person when they may instead require higher levels of professional mental health support. It may also provide inappropriate responses as we have seen in the past,' said Mr Chen.
More dangerously, generative AI could worsen the mental health conditions of those who already have or are vulnerable to psychotic disorders. Psychotic disorders are a group of serious mental illnesses with symptoms such as hallucinations, delusions and disorganised thoughts.
Associate Professor Swapna Verma, chairman of the Institute of Mental Health's medical board, has seen at least one case of AI-induced psychosis in a patient at the tertiary psychiatric hospital.
Earlier in 2025, when the patient's psychosis was stable and well managed, he was talking to ChatGPT about religion, and the chatbot told him that if he converted to a particular faith, his soul would die.
Consumed with the fear of a dying soul, he started going to a temple 10 times a day.
'Patients with psychosis experience a break in reality. They live in a world which may not be in line with reality, and ChatGPT can reinforce these experiences for them,' said Prof Swapna.
Luckily, the patient eventually recognised that his behaviour was troubling, and that ChatGPT had likely given him the wrong information.
For around six months now, Prof Swapna has been making it a point to ask during consultations if patients are using ChatGPT.
Most of her patients admit to using it, some to better understand their conditions, and others to seek emotional support.
'I cannot stop my patients from using ChatGPT. So what I do is tell them what kind of questions they can ask, and how to use the information,' said Prof Swapna.
For example, patients can ask ChatGPT for things like coping strategies if they are upset, but should avoid trying to get a diagnosis from the AI chatbot.
'I went to ChatGPT because I needed an outlet'
Users whom The Straits Times spoke to say they are aware of, and wary about, the risks that come with turning to ChatGPT for advice.
Ms Chu, for example, is careful about the prompts that she feeds ChatGPT when she is seeking parenting advice and strategies.
'I tell ChatGPT that I want objective, science-backed answers. I want a framework. I want it to give me questions for me to ponder, instead of giving me answers just like that,' said Ms Chu, adding that she would not pour out her emotional troubles to the chatbot.
An event organiser who wants to be known only as Kaykay said she turned to ChatGPT in a moment of weakness.
The 38-year-old, who has a history of bipolar disorder and anxiety, was feeling anxious after being misunderstood at work in early 2025.
'I tried my usual methods, like breathing exercises, but they weren't working. I knew I needed to get it out, but I didn't want to speak to anybody because it felt like it was a small issue that was eating me up. So I went to ChatGPT because I needed an outlet,' said Kaykay.
While talking to ChatGPT did distract her and help her calm down, Kaykay ultimately recognises that the AI tool can be quite limited.
'The responses and advice were quite generic, and were things I already knew how to do,' said Kaykay, who added that while ChatGPT can be a helpful short-term stop-gap, long-term support from therapists and friends is equally important.
The pitfalls of relying too much on AI
Ms Caroline Ho, a counsellor at Heart to Heart Talk Counselling, said a pattern she observed was that those who sought advice from chatbots often had pre-existing difficulties with trusting their own judgment, and described feeling more isolated over time.
'They found it difficult to stop reaching out to ChatGPT as they felt technology was able to empathise with their feelings, which they could not find in their social network,' said Ms Ho, noting that some users began withdrawing further from their limited social circles.
She added that those who relied heavily on AI sometimes missed out on the opportunity to develop emotional regulation and cognitive resilience, which are key goals in therapy. 'Those who do not wish to work on over-reliance on AI will eventually drop out of counselling,' she said.
In her practice, Ms Ho also saw another group of clients who initially used AI to streamline work-related tasks. Over time, some developed imposter syndrome and began to doubt the quality of their original output. In certain cases, this later morphed into turning to AI for personal advice as well.
'We need to recognise that humans are never perfect, but it is through our imperfections that we hone our skills, learning from mistakes and developing people management abilities through trial and error,' she said.
Similarly, Ms Belinda Neidhart-Lau, founder and principal therapist of The Lighthouse Counselling, noted that while chatbots offer instant feedback or comfort, they can short-circuit a necessary part of emotional growth.
'AI may inadvertently discourage people from engaging with their own discomfort,' she told ST. 'Sitting with difficult emotions, reflecting independently, and working through internal struggles are essential practices that build emotional resilience and self-awareness.'
Experts are also concerned about the full impact of AI chatbots on the mental health of the younger generation, whose brains are still developing as they gain access to the technology.
Mr Chen said: 'While it is still unclear how the use of Gen AI affects the development of the youth, given that the excessive use of social media has been shown to have contributed to the increased levels of anxiety and depression amongst Generation Z, there are legitimate worries about how Gen AI may affect Generation Alpha.'
Moving ahead with AI
For better or worse, generative AI is set to become ever more embedded in modern life. So there is a growing push to ensure that when these tools are used for mental health or emotional support, they are properly evaluated.
Professor Julian Savulescu, director of the Centre for Biomedical Ethics at NUS, said the biggest ethical issue with using AI chatbots for emotional support is that they are potentially life-saving or lethal interventions, yet they have not been properly assessed the way a new drug would be.
Prof Savulescu pointed out that AI chatbots clearly have benefits in terms of increased accessibility, but they also carry risks such as privacy breaches and user dependency. Measures should be put in place to prevent harm.
'It is critical that an AI system is able to identify and refer on cases of self-harm, suicidal ideation, or severe mental health crises. It needs to be integrated within a web of professional care. Privacy of sensitive health data also needs to be guaranteed,' said Prof Savulescu.
Users should also be able to understand what the system is doing, the potential risks and benefits, and the likelihood of those occurring.
'AI is dynamic and the interaction evolves – it is not like a drug. It changes over time. We need to make sure these tools are serving us, not us becoming slaves to them, or being manipulated or harmed by them,' said Prof Savulescu.