
The AI therapist will see you now: Can chatbots really improve mental health?
As a neuroscientist, I couldn't help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?
Artificial intelligence-powered mental health tools are becoming increasingly popular - and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?
It's an exciting moment for digital mental health, but understanding the trade-offs and limitations of AI-based care is crucial.
Meditation apps, therapy platforms and stand-in bots
AI-based therapy is a relatively new player in the digital therapy field, but the US mental health app market has been booming for the past few years - from free apps that text you back to premium versions with added features like guided breathing prompts.
Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better.
Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises.
Somewhere in the middle are chatbot therapists like Wysa and Woebot, which use AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from USD 10 to USD 100 per month for more comprehensive features or access to licensed professionals.
While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI's emotional intelligence.
Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot.
Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son's mental state. These cases raise ethical questions about the role of AI in sensitive situations.
Where AI comes in
Whether your brain is spiralling, sulking or just needs a nap, there's a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic?
And how exactly does AI therapy work inside our brains?
Most AI mental health apps promise some flavour of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your mind - Kondo being the Japanese tidying expert known for helping people keep only what "sparks joy." You identify unhelpful thought patterns like "I'm a failure," examine them, and decide whether they serve you or just create anxiety.
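To make that loop concrete, here is a minimal, purely illustrative sketch in Python of the kind of cognitive-restructuring exchange such a chatbot walks users through. It is not the actual logic of Woebot, Wysa or any real product; the distortion labels and prompts are hypothetical examples.

```python
# Illustrative sketch only: a toy version of the CBT restructuring loop.
# Real therapy chatbots are far more sophisticated; these labels and
# prompts are hypothetical examples, not any product's actual content.

REAPPRAISAL_PROMPTS = {
    "all-or-nothing": "Is there a middle ground you might be overlooking?",
    "catastrophizing": "What is the most realistic outcome, not the worst one?",
    "labeling": "You have described yourself, not one event. What actually happened?",
}

def restructure(thought: str, distortion: str) -> str:
    """Mirror the thought back, name the pattern, then prompt reappraisal."""
    prompt = REAPPRAISAL_PROMPTS.get(
        distortion, "What evidence supports or contradicts this thought?"
    )
    return (f'You wrote: "{thought}". That sounds like {distortion} thinking. '
            f"{prompt}")

# Example turn: the user reports a negative automatic thought.
print(restructure("I'm a failure", "labeling"))
```

The point of the sketch is simply that the core CBT move - notice the thought, name the pattern, question it - is structured enough to be scripted, which is why chatbots can deliver a version of it at all.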
But can a chatbot help you rewire your thoughts? Surprisingly, there's science suggesting it's possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.
These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.
The neuroscience behind cognitive behavioral therapy is solid: It's about activating the brain's executive control centres, helping us shift our attention, challenge automatic thoughts and regulate our emotions.
The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.
A user's experience, and what it might mean for the brain
"I had a rough week," a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and a prompt generated by its algorithm to try a calming strategy tailored to her mood. Then, to her surprise, it helped her sleep better by week's end.
As a neuroscientist, I couldn't help but ask: Which neurons in her brain were kicking in to help her feel calm?
This isn't a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomised studies, users of mental health apps have reported reduced symptoms of depression and anxiety - outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.
Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called "Therabot" helped reduce depression and anxiety symptoms by nearly half - similar to what people experience with human therapists.
Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks.
While people often report feeling better after using these chatbots, scientists haven't yet confirmed exactly what's happening in the brain during those interactions. In other words, we know they work for many people, but we're still learning how and why.
Red flags and risks
Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomised clinical trials showing improved depression and anxiety symptoms in new moms and college students.
While many mental health apps boast labels like "clinically validated" or "FDA approved," those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22 per cent cited actual scientific studies to back them up.
In addition, chatbots collect sensitive information: your mood metrics, triggers and personal stories. What happens if that data winds up in third-party hands - advertisers, employers or hackers - as has already occurred with genetic data?
In a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than USD 2 million for failing to protect user data.
Unlike clinicians, bots aren't bound by counselling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you're also feeding a database.
And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they're often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say "I hear you" with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can't reach.
So while in mild to moderate cases bot-delivered cognitive behavioral therapy may offer short-term symptom relief, it's important to be aware of their limitations. For the time being, pairing bots with human care - rather than replacing it - is the safest move. (The Conversation)
