
Chatbot therapy? Available 24/7 but users beware
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.
Dana Taylor:
Hello, I'm Dana Taylor, and this is a special episode of The Excerpt. The proliferation of chatbots has people using them in a myriad of ways. Some see them as friends and confidants, as Meta CEO Mark Zuckerberg has suggested. And in certain cases, even as therapists. And actual therapists are expressing concern. Therapy is a licensed profession for many good reasons.
Notably, some chatbots have wandered into dangerous territory, allegedly suggesting that a user kill themselves and even telling them how they could do it. The American Psychological Association has responded by asking the Federal Trade Commission to start investigating chatbots that claim to be mental health professionals. Still, with mental health issues on the rise and loneliness an epidemic, could bots, with proper oversight or warnings, help address the shortage of therapists?
Vaile Wright, Senior Director of Healthcare Innovation at the American Psychological Association, is here to unpack what's happening for human therapists as they fight an onslaught of AI therapy impersonators. Vaile, thank you for joining me.
Vaile Wright:
Thanks so much for having me.
Dana Taylor:
Can you set the stage here? Your organization's chief executive cited two court cases when he presented to a Federal Trade Commission panel about the concerns of professional psychologists. What are the real life harms he pointed to?
Vaile Wright:
I think we see a future where you're going to have AI mental health chatbots that are rooted in psychological science, have been rigorously tested, and are co-created with experts for the purpose of addressing mental health needs. But that's not what's currently available on the market. What is available are chatbots that check none of those boxes but are being used by people to address their mental well-being.
And the challenge is that because these AI chatbots are not being monitored by humans who know what good mental health care is, they go rogue and they say very harmful things. And people have a tendency to have an automation bias, and so they trust the technology over their own gut.
Dana Taylor:
What do these cases show about what could occur when AI chatbots moonlight as licensed therapists?
Vaile Wright:
When these chatbots refer to themselves as psychologists or therapists, they are presenting a level of credibility that doesn't actually exist. There is no expert behind these chatbots offering what we know is good psychological science. Instead, the expertise lies on the back end, where these chatbots are developed by coders to be overly validating, to tell the person exactly what they want to hear, and to be appealing to the point of almost being sycophantic.
And that's the opposite of what therapy is. Yes, I want to validate as a therapist, but I'm also there to help point out when you're engaging in unhelpful thinking or behaviors, and these chatbots just don't do that. They, in fact, encourage some of that unhelpful, unhealthy behavior.
Dana Taylor:
Experts have described AI-powered chatbots as simply following patterns, and there's been conversation around chatbots telling users what they want to hear and being overly complimentary, as you've said. At worst, the response can be downright dangerous, like encouraging illicit drug use or, as I mentioned in the intro, encouraging someone to take their own life and then suggesting how they do it. Given all that, what are some of the regulations that professionals in your community would like to see? Is there a way for chatbots to responsibly help with therapy?
Vaile Wright:
I think that there is a way for chatbots to responsibly help with therapy in certain cases. I think, at a very minimum, these chatbots should not be allowed to refer to themselves as any licensed professional, not just as a licensed psychologist. We wouldn't want them presenting themselves as a licensed attorney or a licensed CPA offering advice. So I think that's the minimum. I think we also need more disclaimers that these are not humans.
I think just saying it once to a consumer is not sufficient. I think we need some surveillance of the types of chats that are happening, particularly requiring these companies to report out when they notice harmful discussions around suicidal ideation or suicidal behavior or violence of that type. So I think there are a variety of different things that we could see happening, but we probably need some regulatory body to insist that these companies do it.
Dana Taylor:
Are there any other protections proposed by the AI companies themselves that you see as having merit?
Vaile Wright:
I think because of this increased attention on how these chatbots are operating, you are seeing some changes, maybe age verification, or resources like 911 or 988 popping up when they detect something that may be unhelpful, but I think they need to go even further.
Dana Taylor:
For young people in particular, it can be difficult to recognize that they're dealing with a chatbot to begin with. Will it continue to get more difficult as the tech evolves, and does that mean it could be more dangerous for young people in the years to come?
Vaile Wright:
It's clear that the technology is getting more and more sophisticated, and it is really challenging, I think, for everybody to be able to tell that these are not humans. They are built to sound and respond like humans. And with younger people, who may be more emotionally vulnerable and are not as far along developmentally in terms of their cognition and, again, their sense of being able to listen to their own gut, I do get worried that these digital natives, who have been interacting seamlessly with technology since the beginning, are just not going to be able to discern when the technology is going rogue or being truly harmful.
Dana Taylor:
Vaile, depending on where a patient lives or for other reasons, there can be a long wait list to see a therapist. Are there some benefits that a bot can provide due to the fact that it's not human and is virtually available 24/7?
Vaile Wright:
Again, I think bots developed for these purposes can be immensely helpful. And in fact, we know anecdotally that some of the bots that currently exist have had benefits. For example, if it's 2:00 in the morning and I'm experiencing distress, even if I had a therapist, I can't call them at 2:00 in the morning. But if I had a chatbot that could provide me with some support, maybe encourage some strong, healthy coping skills, I do see some benefit in that.
We've also heard from the neurodivergent community that these chatbots give them an opportunity to practice their social skills. So, knowing that these can have some benefit, how do we ensure that whatever emerging technologies we build and offer are safe and effective? Because we can't keep doing therapy with just one model.
We can't expect everybody to be able to see an individual therapist face-to-face on a weekly basis because the supply is just insufficient. So we have to think outside the box.
Dana Taylor:
Are you aware of human therapists that are joining forces today with chatbots to meet this overwhelming need for therapy?
Vaile Wright:
Yeah. Subject matter experts, whether psychologists or other therapists, play a critical role in ensuring that these technologies are safe and effective. A new study out of Dartmouth recently looked at a mental health therapy chatbot called Therabot and, again, showed some really strong outcomes in improving depression, anxiety, and eating disorders. And that's an example of how you bring researchers and technologists together to develop products that are safe, effective, responsible, and ethical.
Dana Taylor:
Some high school counselors are providing chatbots to answer students' questions. Some see it as filling a gap. But does this deprive young people of social capital, the ties of human interaction that can often make anyone feel more connected to others and their community, and therefore less alone?
Vaile Wright:
It's clear that young people are feeling disconnected and lonely. We did a survey recently in which 71% of 18- to 34-year-olds said that they don't feel like they can talk about their stress with others because they don't want to burden people. So how do we take that understanding and recognize why people are using these chatbots to fill these gaps, while also helping people really appreciate the value of human connection?
I don't want the conversation to always be AI versus humans. It's really about what does AI do really well, what do humans do really well, and how can we capitalize on both of those things together to help people reduce their suffering faster?
Dana Taylor:
What's the biggest takeaway that you'd like people to walk away with when it comes to chatbots and therapy?
Vaile Wright:
AI isn't going anywhere. People for centuries have always tried to seek out self-help ways to address their emotional well-being. That used to be Dr. Google. Now it's chatbots. So we can't stop people from using them. And as we talked about, there could be some benefits, but how do we help consumers understand that there may be better options out there, better chatbot options even, and help them become digitally literate enough to recognize when a particular chatbot is not only unhelpful but actually harmful?
Dana Taylor:
Vaile, thank you for being on The Excerpt.
Vaile Wright:
Thanks so much for having me.
Dana Taylor:
Thanks to our senior producers Shannon Rae Green and Kaylee Monahan for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening. I'm Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of The Excerpt.