
AI isn't ready to be your therapist, but it's a top reason people use it
From falling in love with ChatGPT to deepfakes of deceased loved ones, artificial intelligence's potential for influence is vast, and its myriad applications are not yet fully charted. In truth, today's AI users are pioneering a new, still swiftly developing technological landscape, something arguably akin to the birth of social media in the early 2000s.
Yet, in an age of uncertainty about nascent generative AI's full potential, people are already turning to artificial intelligence for major life advice. One of the most common ways people use generative AI in 2025, it turns out, is for therapy. But the technology isn't ready yet.
How people use AI in 2025
As of January 2025, ChatGPT topped Visual Capitalist's ranking of the most popular AI tools by monthly site visits, drawing 4.7 billion visits a month. That dwarfed the next most popular service, Canva, by more than five to one.
When it comes to understanding AI use, digging into how ChatGPT is being put to work this year is a good starting point. Sam Altman, CEO of ChatGPT's parent company, OpenAI, recently offered some insight into how its users are making the most of the tool by age group.
'Gross oversimplification, but like older people use ChatGPT as a Google replacement,' Altman said at Sequoia Capital's AI Ascent event a few weeks ago, as transcribed by Fortune. 'Maybe people in their 20s and 30s use it as like a life advisor, and then, like people in college use it as an operating system.'
It turns out that life advice is something a lot of AI users may be seeking these days. Marc Zao-Sanders, author and co-founder of Filtered.com, recently completed a qualitative study, featured in Harvard Business Review, on how people are using AI.
'Therapy/companionship' topped the list as the most common way people are using generative AI, followed by organising one's life and then seeking purpose in life. Taken together with Altman's observations, it seems AI-generated life advice is becoming an incredibly powerful influence.
A Pew Research Center survey published last month reported that a 'vast majority' of surveyed AI experts said people in the United States interact with AI several times a day, if not almost constantly. Around a third of surveyed US adults said they had used a chatbot (which would include things like ChatGPT) before.
Some tech innovators, including a team of Dartmouth researchers, are leaning into the trend.
Therabot, can you treat my anxiety?
Dartmouth researchers have completed a first-of-its-kind clinical trial on a generative AI-powered therapy chatbot. The smartphone app-friendly Therabot has been in development since 2019, and its recent trial showed promise.
Just over 100 patients – each experiencing depressive disorder, generalized anxiety disorder or an eating disorder – participated in the experiment. According to senior study author Nicholas Jacobson, the improvement in each patient's symptoms was comparable to traditional outpatient therapy.
'There is no replacement for in-person care, but there are nowhere near enough providers to go around,' he told the college. Even Dartmouth's Therabot researchers, however, said generative AI is simply not ready yet to be anyone's therapist.
'While these results are very promising, no generative AI agent is ready to operate fully autonomously in mental health where there is a very wide range of high-risk scenarios it might encounter,' first study author Michael Heinz told Dartmouth.
'We still need to better understand and quantify the risks associated with generative AI used in mental health contexts.'
Why is AI not ready to be anyone's therapist?
RCSI University of Medicine and Health Sciences' Ben Bond is a PhD candidate in digital psychiatry who researches ways digital tools can be used to benefit or better understand mental health. Writing in The Conversation, Bond broke down how AI therapy tools like Therabot could pose some significant risks.
Among them, Bond explained that AI 'hallucinations' are known flaws in today's chatbot services. From quoting studies that don't exist to directly giving incorrect information, he said these hallucinations could be dangerous for people seeking mental health treatment.
'Imagine a chatbot misinterpreting a prompt and validating someone's plan to self-harm, or offering advice that unintentionally reinforces harmful behaviour,' Bond wrote. 'While the studies on Therabot and ChatGPT included safeguards – such as clinical oversight and professional input during development – many commercial AI mental health tools do not offer the same protections.'
According to Michael Best, PhD, a psychologist and contributor to Psychology Today, there are other concerns to consider, too.
'Privacy is another pressing concern,' he wrote in Psychology Today. 'In a traditional setting, confidentiality is protected by professional codes and legal frameworks. But with AI, especially when it's cloud-based or connected to larger systems, data security becomes far more complex.
'The very vulnerability that makes therapy effective also makes users more susceptible to harm if their data is breached. Just imagine pouring your heart out to what feels like a safe space, only to later find that your words have become part of a data set used for purposes you never agreed to.'
Best added that bias is a significant concern, something that could lead to AI therapists giving bad advice.
'AI systems learn from the data they're trained on, which often reflect societal biases,' he wrote. 'If these systems are being used to deliver therapeutic interventions, there's a risk that they might unintentionally reinforce stereotypes or offer less accurate support to marginalized communities.
'It's a bit like a mirror that reflects the world not as it should be, but as it has been – skewed by history, inequality, and blind spots.'
Researchers are making progress in improving AI therapy services. Patients with depression experienced an average 51% reduction in symptoms after participating in Dartmouth's Therabot experiment, while those with anxiety saw an average 31% drop. Patients with eating disorders showed the smallest improvement, but still averaged a 19% reduction in symptoms.
It's possible there's a future where artificial intelligence can be trusted to treat mental health, but – according to the experts – we're just not there yet. – The Atlanta Journal-Constitution/Tribune News Service
Related Articles


Malaysian Reserve
8 hours ago
Celestine Achi Launches Free AI Readiness Assessment Tool and Maturity Framework to Accelerate Africa's AI Adoption in PR, Media, and Communications
LAGOS, Nigeria, June 1, 2025 /PRNewswire/ — As the AI revolution sweeps across industries worldwide, one African innovator is ensuring the continent doesn't just keep up — but leads. Dr. Celestine Achi, renowned AI educator, PR technology pioneer, and author of AI-Powered PR: The Essential Guide for Communications Leaders to Master Artificial Intelligence, has unveiled a groundbreaking AI Maturity Assessment Framework and AI Readiness Assessment Tool tailored for African organizations and professionals.

This dual innovation is designed to democratize access to strategic AI evaluation for businesses, agencies, nonprofits, and public sector entities — with a special focus on public relations, media, and communications professionals.

'AI shouldn't be a privilege for the West. It must be a catalyst for transformation in Africa — starting with those who shape public narratives,' said Celestine Achi, Founder of Cihan Digital Academy and architect of the TABS-D AI Implementation Framework.

Empowering Africa's Future-Ready Workforce

The AI Readiness Assessment Tool, now freely available, enables individuals and teams to instantly evaluate their current AI capabilities across key pillars such as strategy, skills, systems, and culture. Upon completion, users receive a customized AI readiness report with practical steps for growth — no technical background required.

The companion AI Maturity Assessment Framework provides a structured pathway for organizations to transition from AI experimentation to enterprise-level integration. Rooted in real-world case studies and tested across PR agencies, newsrooms, and regulatory bodies, the framework allows African leaders to map their journey across five maturity stages: Nascent, Aware, Engaged, Strategic, and Transformational.

Built for Communicators. Designed for Africa.

What sets this initiative apart is its deep contextual relevance. Drawing from Celestine's extensive work with media agencies, government communicators, and enterprise brands across Nigeria and beyond, the tools are optimized for African realities — where connectivity, capacity gaps, and talent development remain major hurdles.

'PR and media professionals are the architects of trust. They deserve the right tools to thrive in this intelligent era,' Achi emphasized. 'With this framework, they can now measure, learn, and lead AI transformation — regardless of their current digital maturity.'

A Movement, Not Just a Tool

Already embraced by industry leaders and professional bodies, the AI Maturity Framework and Readiness Tool are part of the broader AI-Powered PR Ecosystem, a multi-dimensional platform offering:
- The AI-Powered PR playbook
- An immersive PR simulation game built on the TABS-D framework
- Community engagement tools and certification programs

To access the free assessment and start your AI journey, visit:

About Celestine Achi

Celestine Achi (FIIM, MNIPR, ANIMC, Dr. FAIMFIN) is Africa's foremost authority on AI in PR and digital media transformation. He is the author of AI-Powered PR, developer of the TABS-D Framework, and founder of Cihan Digital Academy – a pioneer in AI education for communicators.


Free Malaysia Today
14 hours ago
Silicon Valley VCs navigate uncertain AI future
ChatGPT and its rivals now handle search, translation, and coding all within one chatbot – raising doubts about what new ideas could compete. (AFP pic)

VANCOUVER: For Silicon Valley venture capitalists, the world has split into two camps: those with deep enough pockets to invest in artificial intelligence behemoths, and everyone else waiting to see where the AI revolution leads.

The generative AI frenzy unleashed by ChatGPT in 2022 has propelled a handful of venture-backed companies to eye-watering valuations. Leading the pack is OpenAI, which raised US$40 billion in its latest funding round at a US$300 billion valuation – unprecedented largesse in Silicon Valley's history.

Other AI giants are following suit. Anthropic now commands a US$61.5 billion valuation, while Elon Musk's xAI is reportedly in talks to raise US$20 billion at a US$120 billion price tag.

The stakes have grown so high that even major venture capital firms – the same ones that helped birth the internet revolution – can no longer compete. Mostly, only the deepest pockets remain in the game: big tech companies, Japan's SoftBank, and Middle Eastern investment funds betting big on a post-fossil fuel future.

'There's a really clear split between the haves and the have-nots,' Emily Zheng, senior analyst at PitchBook, told AFP at the Web Summit in Vancouver. 'Even though the top-line figures are very high, it's not necessarily representative of venture overall, because there's just a few elite startups and a lot of them happen to be AI.'

Given Silicon Valley's confidence that AI represents an era-defining shift, venture capitalists face a crucial challenge: finding viable opportunities in an excruciatingly expensive market that is rife with disruption.

Simon Wu of Cathay Innovation sees clear customer demand for AI improvements, even if most spending flows to the biggest players. 'AI across the board, if you're selling a product that makes you more efficient, that's flying off the shelves,' Wu explained. 'People will find money to spend on OpenAI' and the big players.

The real challenge, according to Andy McLoughlin, managing partner at San Francisco-based Uncork Capital, is determining 'where the opportunities are against the mega platforms'. 'If you're OpenAI or Anthropic, the amount that you can do is huge. So where are the places that those companies cannot play?'

Finding that answer isn't easy. In an industry where the large language models behind ChatGPT, Claude and Google's Gemini seem to have limitless potential, everything moves at breakneck speed. AI giants including Google, Microsoft, and Amazon are releasing tools and products at a furious pace.

ChatGPT and its rivals now handle search, translation, and coding all within one chatbot – raising doubts among investors about what new ideas could possibly survive the competition. Generative AI has also democratised software development, allowing non-professionals to code new applications from simple prompts. This completely disrupts traditional startup organisation models.

'Every day I think, what am I going to wake up to today in terms of something that has changed or (was) announced geopolitically or within our world as tech investors,' reflected Christine Tsai, founding partner and CEO at 500 Global.

In Silicon Valley parlance, companies are struggling to find a 'moat' – that unique feature or breakthrough like Microsoft Windows in the 1990s or Google Search in the 2000s that's so successful it takes competitors years to catch up, if ever.

When it comes to business software, AI is 'shaking up the topology of what makes sense and what's investable,' noted Brett Gibson, managing partner at Initialized Capital.

The risks seem particularly acute given that generative AI's economics remain unproven. Even the biggest players see a very uncertain path to profitability given the massive sums involved. The huge valuations for OpenAI and others are causing 'a lot of squinting of the eyes, with people wondering "is this really going to replace labor costs"' at the levels needed to justify the investments, Wu observed.

Despite AI's importance, 'I think everyone's starting to see how this might fall short of the magical', even if it's early days, he added.

Still, only the rare contrarians believe generative AI isn't here to stay. In five years, 'we won't be talking about AI the same way we're talking about it now, the same way we don't talk about mobile or cloud,' predicted McLoughlin. 'It'll become a fabric of how everything gets built.' But who will be building remains an open question.


New Straits Times
18 hours ago
AI oversight needed to ensure fairness, accountability, and inclusivity, says Lee Lam Thye
KUALA LUMPUR: The Alliance for a Safe Community has called for clear, forward-looking regulations and a comprehensive ethical framework to ensure artificial intelligence (AI) development prioritises fairness, accountability, and inclusivity.

"This means avoiding bias in decision-making systems, ensuring that AI enhances human potential rather than replacing it, and making its benefits accessible to all, not just a select few," said chairman Tan Sri Lee Lam Thye in a statement today.

The group proposed a regulatory framework including AI accountability laws, transparency and explainability for AI decision-making that impacts individuals, strengthened data protection and privacy standards, risk assessment and certification requirements, and the creation of public oversight bodies. It also proposed the establishment of a Code of Ethics that is human-centric, non-discriminatory, fair, honest, environmentally responsible, collaborative, and inclusive.

Lee warned that while AI holds promise for healthcare innovations and environmental sustainability, its use must always serve the greater good. Key risks include privacy breaches, algorithmic bias, job displacement, and the spread of misinformation, he added.

"We urge policymakers, tech leaders, civil society, and global institutions to come together to build a framework that ensures AI is safe, inclusive, and used in the best interest of humanity," Lee said.

The group concluded with a warning against a future where technology dictates the terms of our humanity, calling instead for a path where AI amplifies humanity's best qualities for the benefit of all.

On Wednesday, Prime Minister Datuk Seri Anwar Ibrahim said the government plans to push for new legislation aimed at reinterpreting sovereignty in light of the rapid growth of AI and cloud-based technologies. Anwar added that given the evolving role of governance in the digital era, traditional notions of sovereignty, designed for a pre-digital world, must be reconsidered to accommodate new technological realities.