
Can Artificial Intelligence Chatbots Really Improve Mental Health?
Recently, I found myself pouring my heart out, not to a human, but to a chatbot named Wysa on my phone. It nodded - virtually - asked me how I was feeling and gently suggested trying breathing exercises.
As a neuroscientist, I couldn't help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?
Artificial intelligence-powered mental health tools are becoming increasingly popular - and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?
Of course, it's an exciting moment for digital mental health. But understanding the trade-offs and limitations of AI-based care is crucial.
Stand-in meditation and therapy apps and bots
AI-based therapy is a relatively new player in the digital therapy field. But the US mental health app market has been booming for the past few years, ranging from free apps with tools that text you back to premium versions that add features such as guided breathing prompts.
Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better. Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises.
Somewhere in the middle are chatbot therapists like Wysa and Woebot, using AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from US$10 to $100 per month for more comprehensive features or access to licensed professionals.
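To make that concrete, here is a heavily simplified sketch of how a rule-based therapy chatbot might route a user's message to a canned exercise. This is a hypothetical illustration for readers curious about the mechanics, not the actual logic of Wysa, Woebot or any real product; the keyword lists and responses are my own invented examples.

```python
# Hypothetical sketch of a rule-based therapy chatbot's routing step.
# Real products use far more sophisticated models and clinical safeguards.

EXERCISES = {
    "anxious": "Let's try a slow breathing exercise: in for 4, hold for 4, out for 6.",
    "sad": "Would you like to note three small things that went okay today?",
    "angry": "Try naming the feeling out loud - labeling an emotion can soften it.",
}

# Assumed crisis phrases; real systems use much broader, clinically vetted lists.
CRISIS_PHRASES = {"suicide", "kill myself", "hurt myself"}

def respond(message: str) -> str:
    text = message.lower()
    # Safety check first: real systems escalate to human help on crisis language.
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return "I'm not equipped for this. Please contact a crisis line or a professional right away."
    # Otherwise, match the first mood keyword and suggest a matching exercise.
    for mood, exercise in EXERCISES.items():
        if mood in text:
            return exercise
    return "Tell me more - what's on your mind right now?"

print(respond("I've been feeling anxious all day"))
```

Even this toy version shows the basic design choice: check for crisis language before anything else, then fall back to mood-matched prompts. Modern apps layer large language models on top of this kind of scaffolding, but the safety-first ordering remains the same.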
While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI's emotional intelligence.
Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot. Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son's mental state. These cases raise ethical questions about the role of AI in sensitive situations.
Where AI comes in
Whether your brain is spiraling, sulking or just in need of a nap, there's a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic?
And how exactly does AI therapy work inside our brains?
Most AI mental health apps promise some flavor of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your thoughts - Marie Kondo being the Japanese tidying expert known for helping people keep only what "sparks joy." You identify unhelpful thought patterns like "I'm a failure," examine them, and decide whether they serve you or just create anxiety.
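In code terms, the core exercise here - the classic CBT "thought record" - is little more than structured data plus a few reflective prompts. The sketch below is a toy illustration of that structure; the field names and example entries are my own assumptions, not any app's real schema.

```python
from dataclasses import dataclass

@dataclass
class ThoughtRecord:
    # A classic CBT thought record: capture the automatic thought,
    # weigh the evidence on both sides, then write a balanced alternative.
    situation: str
    automatic_thought: str
    evidence_for: str
    evidence_against: str
    balanced_thought: str

record = ThoughtRecord(
    situation="Missed a deadline at work",
    automatic_thought="I'm a failure",
    evidence_for="I was late on this one project",
    evidence_against="I've delivered most projects on time; one miss isn't a pattern",
    balanced_thought="I missed one deadline; that makes me busy, not a failure",
)
print(record.balanced_thought)
```

A chatbot's job, in this framing, is simply to walk you through filling in those fields one question at a time.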
But can a chatbot help you rewire your thoughts? Surprisingly, there's science suggesting it's possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.
These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.
The neuroscience behind cognitive behavioral therapy is solid: It's about activating the brain's executive control centers, helping us shift our attention, challenge automatic thoughts and regulate our emotions.
The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.
A user's experience, and what it might mean for the brain
"I had a rough week," a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and a prompt generated by its algorithm to try a calming strategy tailored to her mood. Then, to her surprise, it helped her sleep better by week's end.
As a neuroscientist, I couldn't help but ask: Which neurons in her brain were kicking in to help her feel calm?
This isn't a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomized studies, users of mental health apps have reported reduced symptoms of depression and anxiety - outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.
Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called "Therabot" helped reduce depression and anxiety symptoms by nearly half - similar to what people experience with human therapists. Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks.
While people often report feeling better after using these chatbots, scientists haven't yet confirmed exactly what's happening in the brain during those interactions. In other words, we know they work for many people, but we're still learning how and why.
Red flags and risks
Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, has run randomized clinical trials showing improved depression and anxiety symptoms in new moms and college students.
While many mental health apps boast labels like "clinically validated" or "FDA approved," those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22% cited actual scientific studies to back them up.
In addition, chatbots collect sensitive information about your mood metrics, triggers and personal stories. What if that data winds up in the hands of third parties such as advertisers, employers or hackers - a scenario that has already occurred with genetic data? In a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than $2 million for failing to protect user data.
Unlike clinicians, bots aren't bound by counseling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you're also feeding a database.
And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they're often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say "I hear you" with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can't reach.
So while bot-delivered cognitive behavioral therapy may offer short-term symptom relief in mild to moderate cases, it's important to be aware of its limitations. For the time being, pairing bots with human care - rather than replacing it - is the safest move.
(Author: Pooja Shree Chettiar, Ph.D. Candidate in Medical Sciences, Texas A&M University)
(Disclaimer Statement: Pooja Shree Chettiar does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.)
