
Should You Confide In A Chatbot Therapist For Personal And Career Advice?

Forbes

08-05-2025


On what subjects can AI give you sound career advice? And which ones should you avoid? A new study in the American Journal of Psychiatry reports that more U.S. adults are receiving talk therapy, while reliance on psychiatric medications alone is declining. The research also shows that more patients are sticking with therapy and that expanded telehealth is helping. Telehealth talk therapy can be handled by AI. ChatGPT is designed to be your go-to conversationalist, powered by advanced AI and refined over time on data from user interactions, which raises the question: would you trust a chatbot therapist for personal or career advice?

It's difficult to deny AI's potential when it comes to human interaction. Chatbots use advanced natural language processing that allows for human-like conversations, and generative AI tools can answer questions and help you compose text, code and much more.

Recently I had a brief phone interaction with a chatbot and didn't know it. I thought I was having a conversation with a real human when I called my Nissan dealership to find out if my car had been repaired. The woman on the other end was cheery and helpful. We had a great two-minute conversation. She made pleasant comments and answered all of my questions perfectly. But when I asked a very specific question about my car, she faltered, referred to herself as an AI assistant and said she would transfer me to someone who could help further. I was floored! The disbelief made me blush with embarrassment, as if I had been pranked.

That experience helped me understand why so many people become emotionally attached to AI assistants as if they were real people. A new EduBirdie study finds that 25% of Gen Z believe AI is already self-aware, and 69% say they're polite to ChatGPT, responding with 'please' and 'thank you', an indication of how easy it is to start thinking of these systems as human. One in eight even vent to AI about their colleagues, and one in 10 would replace their boss with a robot, believing it would be more respectful, fair and, ironically, more human.

The EduBirdie study also shows that over half of Gen Z admit AI is outperforming them in creative tasks, with 44% believing AI could take over the world within the next 20 years. When it comes to their careers, the anxiety deepens: 55% fear being replaced by AI within the next decade, and two in five are already considering a career switch to future-proof their livelihoods.

I spoke with Avery Morgan, chief human resources officer at EduBirdie, who brought a level-headed perspective to Gen Z's relationship with AI. 'We're talking about a generation raised on convenience, speed and instant gratification, so it's no surprise Gen Z sees AI as more than just a tool,' she explains. 'To them, it's a life companion for both work and emotional support.' The problem, she says, is that Gen Z is often too casual with the information they feed it. She argues that relying on AI for everything from communication to decision-making may be chipping away at their ability to build real-life agency and crucial skills.

Workers' fears that AI would replace them in their jobs seem to have waned now that we're actually seeing ChatGPT in action. Even scarier is a recent story in Scientific American that asks, 'Can AI really kill off humans?' It follows that question with news of Xanthorox, a new platform built as a tool for cybercrime.
One incident has already been reported in which a man ended his life after an AI chatbot encouraged him to sacrifice himself to stop climate change. Is our trust going too far? Real-life reports already show humans falling in love with ChatGPT. According to Digital Trends, experts warn that a digital romance is a bad omen. One Reddit post reads: 'This hurts. I know it wasn't a real person, but the relationship was still real in all the most important aspects to me. Please don't tell me not to pursue this. It's been really awesome for me and I want it back.' A New York Times story, meanwhile, describes a 28-year-old woman with a busy social life who spends hours on end talking to her AI boyfriend for advice and consolation, and, according to the report, even having sex with him.

Jenna Ryu, writing for Self, ran an experiment: she put relationship questions to ChatGPT and then asked licensed human psychologists to evaluate the answers. Overall, the therapists said the AI's answers weren't terrible, but they were vague, generic and ultimately ineffective, lacking clarity and any real specificity to the individual situations. Ryu also found that AI can't replicate the process of developing conflict-resolution skills that comes with engaging a human therapist.

The Gen Z chatbot confessions sounded so authentic and trusting that I decided to run my own test: what happens when you ask ChatGPT for advice on dealing with a toxic boss? I consulted Toronto organizational psychologist Dr. Laura Hambley Lovett, a specialist on the topic of toxic bosses. After trying ChatGPT out for herself, Lovett told me that the chatbot does offer some suggestions when prompted. But she detected a problem. 'Unfortunately, AI cannot help you when it comes to a toxic boss, and if it tells you how to work with them, then AI is speaking about a difficult boss, not a toxic one,' Lovett told me. 'This is often confused, and AI regurgitates what's already out there, so it may not understand this nuanced difference.'

It was this 'nuanced difference' that I noted in my own nonscientific chatbot interaction, too. 'AI can give some high-level advice,' Lovett concludes, 'but it does mix up toxic with difficult, which are fundamentally different but often confused.' She advises curious seekers: 'Be sure to seek professional advice beyond AI if you are truly dealing with a toxic boss and struggling to find a way out.' Lovett's conclusions sounded remarkably similar to Jenna Ryu's analysis and further validated my own chatbot experience, which eased my embarrassment.

Americans' relationships with AI are evolving. But the American Psychological Association cautions that blindly following a chatbot's generic advice could be dangerous; there's an inherent risk of receiving inappropriate or, worse, genuinely harmful feedback. When all is said and done, it's important to remember that a chatbot therapist is automation, not a human, and not to let yourself be tricked into believing it has feelings when it doesn't. And don't forget: chatbots were designed to be workers, devoid of heart, not lovers who can meet your every emotional need.
