Can AI chatbots replace human therapists? Unpacking the mental health revolution
Artificial intelligence (AI) is no longer a futuristic concept: it's here, embedded in our daily lives, reshaping how we work, connect and even care for our mental health.
But can a chatbot like ChatGPT truly replace a human therapist?
The question might sound far-fetched, but as the use of AI in mental health continues to grow, this debate has become urgent and deeply personal.
To explore this, I turned to Cassie Chambers, operations director at the South African Depression and Anxiety Group (SADAG), who offered thoughtful insights into this complex conversation.
Let's dive into the pros, cons and the bigger picture of AI's role in mental health support.
What AI can and can't offer
AI tools like ChatGPT are undeniably convenient. Available 24/7, stigma-free, and offering instant responses, they're a lifeline for people seeking immediate support.
'AI can simulate conversations, suggest coping techniques, and even provide resources like breathing exercises or links to helpful videos,' Chambers explains.
'But it cannot replicate the deep empathy, compassion, and authentic human connection that come from a skilled therapist.'
Human therapists bring something irreplaceable: the ability to read subtle cues like tone, body language and even those heavy pauses that convey unspoken emotions.
'Therapists rely on intuition, warmth, and their own lived experiences,' Chambers says. 'This creates a healing relationship built on trust, shared humanity and vulnerability, something no algorithm can fully replicate.'
The wake-up call for traditional therapy
AI's growing popularity highlights gaps in the traditional mental health care model. People want flexibility, affordability and immediate support: qualities often missing in conventional therapy.
'When someone is in crisis, they can't always wait weeks for an appointment,' Chambers notes. 'Traditional models need to evolve to meet these changing demands.'
AI offers an accessible, user-friendly option. It's as simple as opening an app or sending a message, making support available when and where people need it most. SADAG, for example, has embraced a hybrid approach, offering both human-led support groups and digital tools to reach people on their terms.
The bigger question: What does this say about us?
Perhaps the most thought-provoking aspect of this debate isn't about AI replacing therapists, but what it reveals about human connection today.
Chambers reflects on how many people feel more heard and understood by AI than in their real-life relationships. She says, 'This is deeply telling. It shows how much we struggle to find safe spaces where we feel free to open up and be vulnerable.'
The pandemic only deepened this disconnect.
'Covid-19 disrupted our ability to communicate and connect,' Chambers explains.
Young people, in particular, have struggled to rebuild those skills. It's like that saying: You can be in a room full of people and still feel utterly alone. While AI chatbots can simulate empathy and active listening, they can't replace the mutual connection and shared humanity of real relationships.
Research during Covid highlighted the power of a simple phone call with a loved one, showing it could boost mood and mental well-being as effectively as therapy.
Can AI handle complex mental health issues?
AI excels in accessing vast amounts of data, offering insights into therapy techniques, psychological models, and case studies. But when it comes to complex issues like trauma, addiction, or depression, the human touch remains irreplaceable.
'These cases require nuanced, moment-to-moment intuition shaped by personal history, cultural context, and emotions,' Chambers explains.
"A therapist's ability to adapt their approach to each person's unique story is something AI can't replicate.'
That said, AI can complement human therapists by streamlining their work and handling routine tasks like screenings or resource sharing, freeing up therapists to focus on deeper, more complex care.
Emotional bonds with AI: A growing concern
One troubling trend is the emotional attachment some users form with AI. Vulnerable individuals, particularly young people, have reported naming their AI companions and even building what feels like romantic relationships.
These attachments, while understandable, can lead to heartbreak and even harm when users realise the relationship isn't real.
'This is why critical thinking and education are so important,' Chambers stresses.
"AI can be a supportive tool, but it should never replace professional care. Just like you wouldn't trust Google to treat cancer or diabetes, you shouldn't rely solely on AI for mental health.'
So, where do we go from here?
Chambers envisions a collaborative future where AI and human therapists work together.
'AI can handle the basic screenings, psychoeducation, and routine check-ins while human therapists focus on the deeper work of healing,' she says. 'This partnership could make mental health care more accessible without losing the deeply human aspect that's essential for true healing.'
However, the risk is that people might see AI as 'good enough' and stop seeking human connection altogether.
'We must ensure that humans remain at the heart of mental health care,' Chambers emphasises. 'Post-Covid, we've learned that humans need humans. No machine can replace the profound impact of genuine human connection.'
The rise of AI in mental health is a wake-up call not just for therapists, but for all of us.
It forces us to examine how we connect and how we can do better. While AI can be a powerful tool, it's not a solution. It's a supplement, a first step, but never the whole journey.
As we navigate this new digital age, we must prioritise fostering compassionate, accepting relationships in our homes, workplaces and communities.
At the end of the day, no machine can ever make us feel as seen, heard, and valued as another human being can.
